I0720 01:43:04.572829 8 test_context.go:427] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0720 01:43:04.573246 8 e2e.go:129] Starting e2e run "d50aa47d-1e93-455b-a070-ce7baf916b94" on Ginkgo node 1
{"msg":"Test Suite starting","total":294,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1595209383 - Will randomize all specs
Will run 294 of 5214 specs

Jul 20 01:43:04.639: INFO: >>> kubeConfig: /root/.kube/config
Jul 20 01:43:04.643: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jul 20 01:43:04.697: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jul 20 01:43:04.727: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jul 20 01:43:04.727: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jul 20 01:43:04.727: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jul 20 01:43:04.733: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Jul 20 01:43:04.733: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jul 20 01:43:04.733: INFO: e2e test version: v1.20.0-alpha.0.4+2d327ac4558d78
Jul 20 01:43:04.734: INFO: kube-apiserver version: v1.19.0-rc.1
Jul 20 01:43:04.734: INFO: >>> kubeConfig: /root/.kube/config
Jul 20 01:43:04.737: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Lease
  lease API should be available [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Lease
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul 20 01:43:04.737: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename lease-test
Jul 20 01:43:04.789: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] lease API should be available [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Lease
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jul 20 01:43:04.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-3332" for this suite.
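(Editor's sketch, not part of the run: the Lease check above only needs the coordination.k8s.io/v1 group to be served. A minimal client-go probe of the same API; the kubeconfig path matches the log, while the lease name and namespace are illustrative.)

package main

import (
	"context"
	"fmt"

	coordinationv1 "k8s.io/api/coordination/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig the suite uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Creating a Lease and reading it back shows the API is available.
	lease := &coordinationv1.Lease{ObjectMeta: metav1.ObjectMeta{Name: "lease-check"}}
	if _, err := cs.CoordinationV1().Leases("default").Create(context.TODO(), lease, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	got, err := cs.CoordinationV1().Leases("default").Get(context.TODO(), "lease-check", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("lease API available:", got.Name)
}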
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":294,"completed":1,"skipped":31,"failed":0} SSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 01:43:04.913: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Jul 20 01:43:04.993: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the sample API server. Jul 20 01:43:05.617: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Jul 20 01:43:08.049: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730806185, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730806185, loc:(*time.Location)(0x7deddc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730806185, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730806185, loc:(*time.Location)(0x7deddc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-5985bbd468\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 20 01:43:10.053: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730806185, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730806185, loc:(*time.Location)(0x7deddc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730806185, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730806185, loc:(*time.Location)(0x7deddc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-5985bbd468\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 20 01:43:12.779: INFO: Waited 723.922699ms for the sample-apiserver to be ready to handle requests. 
[AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 01:43:13.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-6277" for this suite. • [SLOW TEST:8.721 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":294,"completed":2,"skipped":34,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 01:43:13.635: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-9351 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a new StatefulSet Jul 20 01:43:14.024: INFO: Found 0 stateful pods, waiting for 3 Jul 20 01:43:24.045: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jul 20 01:43:24.045: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jul 20 01:43:24.045: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jul 20 01:43:34.030: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jul 20 01:43:34.030: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jul 20 01:43:34.030: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Jul 20 01:43:34.041: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9351 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jul 20 01:43:40.422: INFO: stderr: "I0720 01:43:40.307602 28 log.go:181] (0xc00011fa20) (0xc000ac1900) Create stream\nI0720 01:43:40.307661 28 log.go:181] (0xc00011fa20) (0xc000ac1900) Stream added, broadcasting: 1\nI0720 01:43:40.309618 28 log.go:181] (0xc00011fa20) Reply frame received for 1\nI0720 01:43:40.309649 28 
log.go:181] (0xc00011fa20) (0xc000aaf400) Create stream\nI0720 01:43:40.309660 28 log.go:181] (0xc00011fa20) (0xc000aaf400) Stream added, broadcasting: 3\nI0720 01:43:40.310513 28 log.go:181] (0xc00011fa20) Reply frame received for 3\nI0720 01:43:40.310560 28 log.go:181] (0xc00011fa20) (0xc000ac19a0) Create stream\nI0720 01:43:40.310579 28 log.go:181] (0xc00011fa20) (0xc000ac19a0) Stream added, broadcasting: 5\nI0720 01:43:40.311476 28 log.go:181] (0xc00011fa20) Reply frame received for 5\nI0720 01:43:40.384391 28 log.go:181] (0xc00011fa20) Data frame received for 5\nI0720 01:43:40.384439 28 log.go:181] (0xc000ac19a0) (5) Data frame handling\nI0720 01:43:40.384469 28 log.go:181] (0xc000ac19a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0720 01:43:40.413897 28 log.go:181] (0xc00011fa20) Data frame received for 3\nI0720 01:43:40.413922 28 log.go:181] (0xc000aaf400) (3) Data frame handling\nI0720 01:43:40.413932 28 log.go:181] (0xc000aaf400) (3) Data frame sent\nI0720 01:43:40.413940 28 log.go:181] (0xc00011fa20) Data frame received for 3\nI0720 01:43:40.413946 28 log.go:181] (0xc000aaf400) (3) Data frame handling\nI0720 01:43:40.413970 28 log.go:181] (0xc00011fa20) Data frame received for 5\nI0720 01:43:40.413978 28 log.go:181] (0xc000ac19a0) (5) Data frame handling\nI0720 01:43:40.416285 28 log.go:181] (0xc00011fa20) Data frame received for 1\nI0720 01:43:40.416321 28 log.go:181] (0xc000ac1900) (1) Data frame handling\nI0720 01:43:40.416344 28 log.go:181] (0xc000ac1900) (1) Data frame sent\nI0720 01:43:40.416377 28 log.go:181] (0xc00011fa20) (0xc000ac1900) Stream removed, broadcasting: 1\nI0720 01:43:40.416431 28 log.go:181] (0xc00011fa20) Go away received\nI0720 01:43:40.417085 28 log.go:181] (0xc00011fa20) (0xc000ac1900) Stream removed, broadcasting: 1\nI0720 01:43:40.417111 28 log.go:181] (0xc00011fa20) (0xc000aaf400) Stream removed, broadcasting: 3\nI0720 01:43:40.417123 28 log.go:181] (0xc00011fa20) (0xc000ac19a0) Stream removed, broadcasting: 5\n" Jul 20 01:43:40.422: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jul 20 01:43:40.422: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Jul 20 01:43:50.498: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Jul 20 01:44:00.584: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9351 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 20 01:44:00.867: INFO: stderr: "I0720 01:44:00.757294 46 log.go:181] (0xc000d17550) (0xc001010280) Create stream\nI0720 01:44:00.757340 46 log.go:181] (0xc000d17550) (0xc001010280) Stream added, broadcasting: 1\nI0720 01:44:00.762642 46 log.go:181] (0xc000d17550) Reply frame received for 1\nI0720 01:44:00.762705 46 log.go:181] (0xc000d17550) (0xc000aaf2c0) Create stream\nI0720 01:44:00.762732 46 log.go:181] (0xc000d17550) (0xc000aaf2c0) Stream added, broadcasting: 3\nI0720 01:44:00.763606 46 log.go:181] (0xc000d17550) Reply frame received for 3\nI0720 01:44:00.763634 46 log.go:181] (0xc000d17550) (0xc000a24820) Create stream\nI0720 01:44:00.763642 46 log.go:181] (0xc000d17550) (0xc000a24820) Stream added, broadcasting: 5\nI0720 01:44:00.764919 46 
log.go:181] (0xc000d17550) Reply frame received for 5\nI0720 01:44:00.859268 46 log.go:181] (0xc000d17550) Data frame received for 5\nI0720 01:44:00.859322 46 log.go:181] (0xc000a24820) (5) Data frame handling\nI0720 01:44:00.859346 46 log.go:181] (0xc000a24820) (5) Data frame sent\nI0720 01:44:00.859364 46 log.go:181] (0xc000d17550) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0720 01:44:00.859397 46 log.go:181] (0xc000d17550) Data frame received for 3\nI0720 01:44:00.859431 46 log.go:181] (0xc000aaf2c0) (3) Data frame handling\nI0720 01:44:00.859445 46 log.go:181] (0xc000aaf2c0) (3) Data frame sent\nI0720 01:44:00.859458 46 log.go:181] (0xc000d17550) Data frame received for 3\nI0720 01:44:00.859469 46 log.go:181] (0xc000aaf2c0) (3) Data frame handling\nI0720 01:44:00.859504 46 log.go:181] (0xc000a24820) (5) Data frame handling\nI0720 01:44:00.862222 46 log.go:181] (0xc000d17550) Data frame received for 1\nI0720 01:44:00.862262 46 log.go:181] (0xc001010280) (1) Data frame handling\nI0720 01:44:00.862298 46 log.go:181] (0xc001010280) (1) Data frame sent\nI0720 01:44:00.862317 46 log.go:181] (0xc000d17550) (0xc001010280) Stream removed, broadcasting: 1\nI0720 01:44:00.862382 46 log.go:181] (0xc000d17550) Go away received\nI0720 01:44:00.862851 46 log.go:181] (0xc000d17550) (0xc001010280) Stream removed, broadcasting: 1\nI0720 01:44:00.862871 46 log.go:181] (0xc000d17550) (0xc000aaf2c0) Stream removed, broadcasting: 3\nI0720 01:44:00.862885 46 log.go:181] (0xc000d17550) (0xc000a24820) Stream removed, broadcasting: 5\n" Jul 20 01:44:00.867: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jul 20 01:44:00.867: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jul 20 01:44:20.887: INFO: Waiting for StatefulSet statefulset-9351/ss2 to complete update Jul 20 01:44:20.887: INFO: Waiting for Pod statefulset-9351/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Rolling back to a previous revision Jul 20 01:44:30.894: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9351 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jul 20 01:44:31.142: INFO: stderr: "I0720 01:44:31.026691 64 log.go:181] (0xc0007ab8c0) (0xc000c24b40) Create stream\nI0720 01:44:31.026738 64 log.go:181] (0xc0007ab8c0) (0xc000c24b40) Stream added, broadcasting: 1\nI0720 01:44:31.030818 64 log.go:181] (0xc0007ab8c0) Reply frame received for 1\nI0720 01:44:31.030861 64 log.go:181] (0xc0007ab8c0) (0xc000e0a3c0) Create stream\nI0720 01:44:31.030885 64 log.go:181] (0xc0007ab8c0) (0xc000e0a3c0) Stream added, broadcasting: 3\nI0720 01:44:31.031620 64 log.go:181] (0xc0007ab8c0) Reply frame received for 3\nI0720 01:44:31.031648 64 log.go:181] (0xc0007ab8c0) (0xc000c24be0) Create stream\nI0720 01:44:31.031656 64 log.go:181] (0xc0007ab8c0) (0xc000c24be0) Stream added, broadcasting: 5\nI0720 01:44:31.032285 64 log.go:181] (0xc0007ab8c0) Reply frame received for 5\nI0720 01:44:31.097771 64 log.go:181] (0xc0007ab8c0) Data frame received for 5\nI0720 01:44:31.097792 64 log.go:181] (0xc000c24be0) (5) Data frame handling\nI0720 01:44:31.097802 64 log.go:181] (0xc000c24be0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0720 01:44:31.134951 64 log.go:181] (0xc0007ab8c0) Data frame received for 3\nI0720 01:44:31.134999 64 
log.go:181] (0xc000e0a3c0) (3) Data frame handling\nI0720 01:44:31.135021 64 log.go:181] (0xc000e0a3c0) (3) Data frame sent\nI0720 01:44:31.135069 64 log.go:181] (0xc0007ab8c0) Data frame received for 5\nI0720 01:44:31.135106 64 log.go:181] (0xc000c24be0) (5) Data frame handling\nI0720 01:44:31.135306 64 log.go:181] (0xc0007ab8c0) Data frame received for 3\nI0720 01:44:31.135362 64 log.go:181] (0xc000e0a3c0) (3) Data frame handling\nI0720 01:44:31.137597 64 log.go:181] (0xc0007ab8c0) Data frame received for 1\nI0720 01:44:31.137623 64 log.go:181] (0xc000c24b40) (1) Data frame handling\nI0720 01:44:31.137641 64 log.go:181] (0xc000c24b40) (1) Data frame sent\nI0720 01:44:31.137658 64 log.go:181] (0xc0007ab8c0) (0xc000c24b40) Stream removed, broadcasting: 1\nI0720 01:44:31.137685 64 log.go:181] (0xc0007ab8c0) Go away received\nI0720 01:44:31.138066 64 log.go:181] (0xc0007ab8c0) (0xc000c24b40) Stream removed, broadcasting: 1\nI0720 01:44:31.138078 64 log.go:181] (0xc0007ab8c0) (0xc000e0a3c0) Stream removed, broadcasting: 3\nI0720 01:44:31.138083 64 log.go:181] (0xc0007ab8c0) (0xc000c24be0) Stream removed, broadcasting: 5\n" Jul 20 01:44:31.142: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jul 20 01:44:31.142: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jul 20 01:44:41.178: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Jul 20 01:44:51.440: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9351 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 20 01:44:51.680: INFO: stderr: "I0720 01:44:51.603470 82 log.go:181] (0xc000dbafd0) (0xc000f90460) Create stream\nI0720 01:44:51.603533 82 log.go:181] (0xc000dbafd0) (0xc000f90460) Stream added, broadcasting: 1\nI0720 01:44:51.610218 82 log.go:181] (0xc000dbafd0) Reply frame received for 1\nI0720 01:44:51.610262 82 log.go:181] (0xc000dbafd0) (0xc00094d220) Create stream\nI0720 01:44:51.610274 82 log.go:181] (0xc000dbafd0) (0xc00094d220) Stream added, broadcasting: 3\nI0720 01:44:51.611324 82 log.go:181] (0xc000dbafd0) Reply frame received for 3\nI0720 01:44:51.611354 82 log.go:181] (0xc000dbafd0) (0xc0004a2280) Create stream\nI0720 01:44:51.611363 82 log.go:181] (0xc000dbafd0) (0xc0004a2280) Stream added, broadcasting: 5\nI0720 01:44:51.612174 82 log.go:181] (0xc000dbafd0) Reply frame received for 5\nI0720 01:44:51.673557 82 log.go:181] (0xc000dbafd0) Data frame received for 3\nI0720 01:44:51.673606 82 log.go:181] (0xc00094d220) (3) Data frame handling\nI0720 01:44:51.673621 82 log.go:181] (0xc00094d220) (3) Data frame sent\nI0720 01:44:51.673630 82 log.go:181] (0xc000dbafd0) Data frame received for 3\nI0720 01:44:51.673637 82 log.go:181] (0xc00094d220) (3) Data frame handling\nI0720 01:44:51.673664 82 log.go:181] (0xc000dbafd0) Data frame received for 5\nI0720 01:44:51.673675 82 log.go:181] (0xc0004a2280) (5) Data frame handling\nI0720 01:44:51.673691 82 log.go:181] (0xc0004a2280) (5) Data frame sent\nI0720 01:44:51.673714 82 log.go:181] (0xc000dbafd0) Data frame received for 5\nI0720 01:44:51.673722 82 log.go:181] (0xc0004a2280) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0720 01:44:51.674960 82 log.go:181] (0xc000dbafd0) Data frame received for 1\nI0720 01:44:51.674975 82 log.go:181] (0xc000f90460) (1) Data frame handling\nI0720 
01:44:51.674990 82 log.go:181] (0xc000f90460) (1) Data frame sent\nI0720 01:44:51.675091 82 log.go:181] (0xc000dbafd0) (0xc000f90460) Stream removed, broadcasting: 1\nI0720 01:44:51.675135 82 log.go:181] (0xc000dbafd0) Go away received\nI0720 01:44:51.675366 82 log.go:181] (0xc000dbafd0) (0xc000f90460) Stream removed, broadcasting: 1\nI0720 01:44:51.675379 82 log.go:181] (0xc000dbafd0) (0xc00094d220) Stream removed, broadcasting: 3\nI0720 01:44:51.675385 82 log.go:181] (0xc000dbafd0) (0xc0004a2280) Stream removed, broadcasting: 5\n" Jul 20 01:44:51.680: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jul 20 01:44:51.680: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jul 20 01:45:01.760: INFO: Waiting for StatefulSet statefulset-9351/ss2 to complete update Jul 20 01:45:01.760: INFO: Waiting for Pod statefulset-9351/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jul 20 01:45:01.760: INFO: Waiting for Pod statefulset-9351/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jul 20 01:45:01.760: INFO: Waiting for Pod statefulset-9351/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jul 20 01:45:11.807: INFO: Waiting for StatefulSet statefulset-9351/ss2 to complete update Jul 20 01:45:11.807: INFO: Waiting for Pod statefulset-9351/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Jul 20 01:45:21.768: INFO: Deleting all statefulset in ns statefulset-9351 Jul 20 01:45:21.771: INFO: Scaling statefulset ss2 to 0 Jul 20 01:45:51.848: INFO: Waiting for statefulset status.replicas updated to 0 Jul 20 01:45:51.851: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 01:45:51.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9351" for this suite. 
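(Editor's sketch, not part of the run: the rolling update above is driven by a single change to the pod template; the framework then waits for status.currentRevision to catch up with status.updateRevision, the two revision hashes visible in the log as ss2-84f9d6bf57 and ss2-65c7964b94. The helper below assumes a clientset and context wired as in the Lease sketch; imports needed: context, time, metav1, k8s.io/apimachinery/pkg/util/wait, k8s.io/client-go/kubernetes.)

func updateImageAndWait(ctx context.Context, cs kubernetes.Interface, ns, name, image string) error {
	set, err := cs.AppsV1().StatefulSets(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	// Changing the template image creates a new controller revision and
	// starts the rolling update.
	set.Spec.Template.Spec.Containers[0].Image = image
	if _, err = cs.AppsV1().StatefulSets(ns).Update(ctx, set, metav1.UpdateOptions{}); err != nil {
		return err
	}
	// "Waiting for StatefulSet ... to complete update" boils down to this condition.
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		s, err := cs.AppsV1().StatefulSets(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return s.Status.UpdatedReplicas == *s.Spec.Replicas &&
			s.Status.CurrentRevision == s.Status.UpdateRevision, nil
	})
}

(The rollback step in the log is the same operation with the previous image, which is why the revision hashes simply swap roles.)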
• [SLOW TEST:158.339 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":294,"completed":3,"skipped":45,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul 20 01:45:51.975: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-5e4457cb-1c40-4afe-9f7a-7d58de6279c0
STEP: Creating a pod to test consume secrets
Jul 20 01:45:52.065: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a0a1cb14-d417-44ca-ad89-5f38ceb842b5" in namespace "projected-5985" to be "Succeeded or Failed"
Jul 20 01:45:52.124: INFO: Pod "pod-projected-secrets-a0a1cb14-d417-44ca-ad89-5f38ceb842b5": Phase="Pending", Reason="", readiness=false. Elapsed: 59.68929ms
Jul 20 01:45:54.303: INFO: Pod "pod-projected-secrets-a0a1cb14-d417-44ca-ad89-5f38ceb842b5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.238256003s
Jul 20 01:45:56.346: INFO: Pod "pod-projected-secrets-a0a1cb14-d417-44ca-ad89-5f38ceb842b5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.281471398s
STEP: Saw pod success
Jul 20 01:45:56.346: INFO: Pod "pod-projected-secrets-a0a1cb14-d417-44ca-ad89-5f38ceb842b5" satisfied condition "Succeeded or Failed"
Jul 20 01:45:56.349: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-a0a1cb14-d417-44ca-ad89-5f38ceb842b5 container projected-secret-volume-test:
STEP: delete the pod
Jul 20 01:45:56.579: INFO: Waiting for pod pod-projected-secrets-a0a1cb14-d417-44ca-ad89-5f38ceb842b5 to disappear
Jul 20 01:45:56.583: INFO: Pod pod-projected-secrets-a0a1cb14-d417-44ca-ad89-5f38ceb842b5 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jul 20 01:45:56.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5985" for this suite.
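(Editor's sketch, not part of the run: schematically, the pod under test mounts a projected volume sourcing a secret, with DefaultMode forcing the file mode that the test container then verifies. Names, image, and command below are illustrative, not taken from this run; imports assumed: v1 "k8s.io/api/core/v1", metav1 "k8s.io/apimachinery/pkg/apis/meta/v1".)

mode := int32(0400) // the "defaultMode set" part of the test
pod := &v1.Pod{
	ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-demo"},
	Spec: v1.PodSpec{
		RestartPolicy: v1.RestartPolicyNever, // pod must end Succeeded or Failed
		Volumes: []v1.Volume{{
			Name: "projected-secret-volume",
			VolumeSource: v1.VolumeSource{
				Projected: &v1.ProjectedVolumeSource{
					DefaultMode: &mode,
					Sources: []v1.VolumeProjection{{
						Secret: &v1.SecretProjection{
							LocalObjectReference: v1.LocalObjectReference{Name: "projected-secret-test"},
						},
					}},
				},
			},
		}},
		Containers: []v1.Container{{
			Name:    "projected-secret-volume-test",
			Image:   "busybox", // illustrative; the suite uses its own mounttest image
			Command: []string{"sh", "-c", "stat -c %a /etc/projected-secret-volume/*"},
			VolumeMounts: []v1.VolumeMount{{
				Name:      "projected-secret-volume",
				MountPath: "/etc/projected-secret-volume",
			}},
		}},
	},
}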
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":4,"skipped":87,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 01:45:56.762: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-71c0889b-7121-4826-8dee-1a494c691682 STEP: Creating a pod to test consume configMaps Jul 20 01:45:57.100: INFO: Waiting up to 5m0s for pod "pod-configmaps-3ebc16f1-bab1-475f-970e-2248100c725f" in namespace "configmap-2541" to be "Succeeded or Failed" Jul 20 01:45:57.150: INFO: Pod "pod-configmaps-3ebc16f1-bab1-475f-970e-2248100c725f": Phase="Pending", Reason="", readiness=false. Elapsed: 50.295023ms Jul 20 01:45:59.156: INFO: Pod "pod-configmaps-3ebc16f1-bab1-475f-970e-2248100c725f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05611861s Jul 20 01:46:01.160: INFO: Pod "pod-configmaps-3ebc16f1-bab1-475f-970e-2248100c725f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059731454s Jul 20 01:46:03.196: INFO: Pod "pod-configmaps-3ebc16f1-bab1-475f-970e-2248100c725f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.095734751s STEP: Saw pod success Jul 20 01:46:03.196: INFO: Pod "pod-configmaps-3ebc16f1-bab1-475f-970e-2248100c725f" satisfied condition "Succeeded or Failed" Jul 20 01:46:03.199: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-3ebc16f1-bab1-475f-970e-2248100c725f container configmap-volume-test: STEP: delete the pod Jul 20 01:46:03.294: INFO: Waiting for pod pod-configmaps-3ebc16f1-bab1-475f-970e-2248100c725f to disappear Jul 20 01:46:03.363: INFO: Pod pod-configmaps-3ebc16f1-bab1-475f-970e-2248100c725f no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 01:46:03.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2541" for this suite. 
• [SLOW TEST:6.609 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":294,"completed":5,"skipped":101,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 01:46:03.372: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Jul 20 01:46:03.466: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jul 20 01:46:03.493: INFO: Waiting for terminating namespaces to be deleted... Jul 20 01:46:03.496: INFO: Logging pods the apiserver thinks is on node latest-worker before test Jul 20 01:46:03.501: INFO: coredns-f9fd979d6-s745j from kube-system started at 2020-07-19 21:39:25 +0000 UTC (1 container statuses recorded) Jul 20 01:46:03.501: INFO: Container coredns ready: true, restart count 0 Jul 20 01:46:03.501: INFO: coredns-f9fd979d6-zs4sj from kube-system started at 2020-07-19 21:39:36 +0000 UTC (1 container statuses recorded) Jul 20 01:46:03.501: INFO: Container coredns ready: true, restart count 0 Jul 20 01:46:03.501: INFO: kindnet-46dnt from kube-system started at 2020-07-19 21:38:46 +0000 UTC (1 container statuses recorded) Jul 20 01:46:03.501: INFO: Container kindnet-cni ready: true, restart count 0 Jul 20 01:46:03.501: INFO: kube-proxy-sxpg9 from kube-system started at 2020-07-19 21:38:45 +0000 UTC (1 container statuses recorded) Jul 20 01:46:03.501: INFO: Container kube-proxy ready: true, restart count 0 Jul 20 01:46:03.501: INFO: local-path-provisioner-8b46957d4-2gzpd from local-path-storage started at 2020-07-19 21:39:25 +0000 UTC (1 container statuses recorded) Jul 20 01:46:03.501: INFO: Container local-path-provisioner ready: true, restart count 0 Jul 20 01:46:03.501: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test Jul 20 01:46:03.505: INFO: kindnet-g6zbt from kube-system started at 2020-07-19 21:38:46 +0000 UTC (1 container statuses recorded) Jul 20 01:46:03.505: INFO: Container kindnet-cni ready: true, restart count 0 Jul 20 01:46:03.505: INFO: kube-proxy-nsnzn from kube-system started at 2020-07-19 21:38:45 +0000 UTC (1 container statuses recorded) Jul 20 01:46:03.505: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: verifying the node has the label node latest-worker STEP: verifying the node has the label node latest-worker2 Jul 20 
01:46:03.569: INFO: Pod coredns-f9fd979d6-s745j requesting resource cpu=100m on Node latest-worker Jul 20 01:46:03.569: INFO: Pod coredns-f9fd979d6-zs4sj requesting resource cpu=100m on Node latest-worker Jul 20 01:46:03.569: INFO: Pod kindnet-46dnt requesting resource cpu=100m on Node latest-worker Jul 20 01:46:03.569: INFO: Pod kindnet-g6zbt requesting resource cpu=100m on Node latest-worker2 Jul 20 01:46:03.569: INFO: Pod kube-proxy-nsnzn requesting resource cpu=0m on Node latest-worker2 Jul 20 01:46:03.569: INFO: Pod kube-proxy-sxpg9 requesting resource cpu=0m on Node latest-worker Jul 20 01:46:03.569: INFO: Pod local-path-provisioner-8b46957d4-2gzpd requesting resource cpu=0m on Node latest-worker STEP: Starting Pods to consume most of the cluster CPU. Jul 20 01:46:03.569: INFO: Creating a pod which consumes cpu=10990m on Node latest-worker Jul 20 01:46:03.575: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker2 STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-bd560065-12bc-4b2a-8a23-abf84046d1eb.162352a68c9d1f3e], Reason = [Started], Message = [Started container filler-pod-bd560065-12bc-4b2a-8a23-abf84046d1eb] STEP: Considering event: Type = [Normal], Name = [filler-pod-dbb604dc-ff3c-4326-ae9a-70b41667b8b4.162352a5c2a246ec], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-bd560065-12bc-4b2a-8a23-abf84046d1eb.162352a5764df926], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8472/filler-pod-bd560065-12bc-4b2a-8a23-abf84046d1eb to latest-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-bd560065-12bc-4b2a-8a23-abf84046d1eb.162352a67a516230], Reason = [Created], Message = [Created container filler-pod-bd560065-12bc-4b2a-8a23-abf84046d1eb] STEP: Considering event: Type = [Normal], Name = [filler-pod-bd560065-12bc-4b2a-8a23-abf84046d1eb.162352a61617212a], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-dbb604dc-ff3c-4326-ae9a-70b41667b8b4.162352a573ae3f63], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8472/filler-pod-dbb604dc-ff3c-4326-ae9a-70b41667b8b4 to latest-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-dbb604dc-ff3c-4326-ae9a-70b41667b8b4.162352a63f45f1b9], Reason = [Created], Message = [Created container filler-pod-dbb604dc-ff3c-4326-ae9a-70b41667b8b4] STEP: Considering event: Type = [Normal], Name = [filler-pod-dbb604dc-ff3c-4326-ae9a-70b41667b8b4.162352a6651087b7], Reason = [Started], Message = [Started container filler-pod-dbb604dc-ff3c-4326-ae9a-70b41667b8b4] STEP: Considering event: Type = [Warning], Name = [additional-pod.162352a6ec416877], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.] 
STEP: removing the label node off the node latest-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node latest-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 01:46:11.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8472" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:7.675 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":294,"completed":6,"skipped":126,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 01:46:11.047: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-b71f844e-517f-4f45-8353-87f8b47ebccd STEP: Creating a pod to test consume secrets Jul 20 01:46:11.190: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0538226a-a9cd-4903-9c61-4758c5e9963e" in namespace "projected-9745" to be "Succeeded or Failed" Jul 20 01:46:11.276: INFO: Pod "pod-projected-secrets-0538226a-a9cd-4903-9c61-4758c5e9963e": Phase="Pending", Reason="", readiness=false. Elapsed: 86.011596ms Jul 20 01:46:13.544: INFO: Pod "pod-projected-secrets-0538226a-a9cd-4903-9c61-4758c5e9963e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.353346842s Jul 20 01:46:15.561: INFO: Pod "pod-projected-secrets-0538226a-a9cd-4903-9c61-4758c5e9963e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.370979248s Jul 20 01:46:17.765: INFO: Pod "pod-projected-secrets-0538226a-a9cd-4903-9c61-4758c5e9963e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.574781673s STEP: Saw pod success Jul 20 01:46:17.765: INFO: Pod "pod-projected-secrets-0538226a-a9cd-4903-9c61-4758c5e9963e" satisfied condition "Succeeded or Failed" Jul 20 01:46:17.768: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-0538226a-a9cd-4903-9c61-4758c5e9963e container projected-secret-volume-test: STEP: delete the pod Jul 20 01:46:18.289: INFO: Waiting for pod pod-projected-secrets-0538226a-a9cd-4903-9c61-4758c5e9963e to disappear Jul 20 01:46:18.324: INFO: Pod pod-projected-secrets-0538226a-a9cd-4903-9c61-4758c5e9963e no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 01:46:18.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9745" for this suite. • [SLOW TEST:7.284 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":294,"completed":7,"skipped":154,"failed":0} SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 01:46:18.332: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:255 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:307 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a replication controller Jul 20 01:46:18.435: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8607' Jul 20 01:46:19.181: INFO: stderr: "" Jul 20 01:46:19.181: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jul 20 01:46:19.181: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8607' Jul 20 01:46:19.708: INFO: stderr: "" Jul 20 01:46:19.708: INFO: stdout: "update-demo-nautilus-l2xzg update-demo-nautilus-swspw " Jul 20 01:46:19.708: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-l2xzg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8607' Jul 20 01:46:19.904: INFO: stderr: "" Jul 20 01:46:19.905: INFO: stdout: "" Jul 20 01:46:19.905: INFO: update-demo-nautilus-l2xzg is created but not running Jul 20 01:46:24.905: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8607' Jul 20 01:46:25.001: INFO: stderr: "" Jul 20 01:46:25.001: INFO: stdout: "update-demo-nautilus-l2xzg update-demo-nautilus-swspw " Jul 20 01:46:25.002: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-l2xzg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8607' Jul 20 01:46:25.095: INFO: stderr: "" Jul 20 01:46:25.095: INFO: stdout: "true" Jul 20 01:46:25.095: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-l2xzg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8607' Jul 20 01:46:25.305: INFO: stderr: "" Jul 20 01:46:25.305: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jul 20 01:46:25.305: INFO: validating pod update-demo-nautilus-l2xzg Jul 20 01:46:25.370: INFO: got data: { "image": "nautilus.jpg" } Jul 20 01:46:25.370: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jul 20 01:46:25.370: INFO: update-demo-nautilus-l2xzg is verified up and running Jul 20 01:46:25.370: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-swspw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8607' Jul 20 01:46:25.480: INFO: stderr: "" Jul 20 01:46:25.480: INFO: stdout: "true" Jul 20 01:46:25.481: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-swspw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8607' Jul 20 01:46:25.612: INFO: stderr: "" Jul 20 01:46:25.612: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jul 20 01:46:25.612: INFO: validating pod update-demo-nautilus-swspw Jul 20 01:46:25.616: INFO: got data: { "image": "nautilus.jpg" } Jul 20 01:46:25.616: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jul 20 01:46:25.616: INFO: update-demo-nautilus-swspw is verified up and running STEP: using delete to clean up resources Jul 20 01:46:25.616: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8607' Jul 20 01:46:25.768: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jul 20 01:46:25.768: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jul 20 01:46:25.768: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8607' Jul 20 01:46:25.872: INFO: stderr: "No resources found in kubectl-8607 namespace.\n" Jul 20 01:46:25.872: INFO: stdout: "" Jul 20 01:46:25.872: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8607 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jul 20 01:46:25.995: INFO: stderr: "" Jul 20 01:46:25.995: INFO: stdout: "update-demo-nautilus-l2xzg\nupdate-demo-nautilus-swspw\n" Jul 20 01:46:26.495: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8607' Jul 20 01:46:26.641: INFO: stderr: "No resources found in kubectl-8607 namespace.\n" Jul 20 01:46:26.641: INFO: stdout: "" Jul 20 01:46:26.641: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8607 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jul 20 01:46:27.040: INFO: stderr: "" Jul 20 01:46:27.040: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 01:46:27.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8607" for this suite. 
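(Editor's sketch, not part of the run: the repeated kubectl template queries above, e.g. {{if (exists . "status" "containerStatuses")}}...{{end}}, are a shell-side way of asking whether the update-demo container reports state.running. The equivalent check with client-go, assuming a clientset cs and context ctx as in the earlier sketches; imports assumed: fmt, metav1 "k8s.io/apimachinery/pkg/apis/meta/v1".)

pods, err := cs.CoreV1().Pods("kubectl-8607").List(ctx, metav1.ListOptions{
	LabelSelector: "name=update-demo", // same selector as the -l flag above
})
if err != nil {
	return err
}
for _, p := range pods.Items {
	for _, st := range p.Status.ContainerStatuses {
		// Mirrors the template's (eq .name "update-demo") and (exists . "state" "running").
		if st.Name == "update-demo" && st.State.Running != nil {
			fmt.Printf("%s is running\n", p.Name)
		}
	}
}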
• [SLOW TEST:9.303 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:305
    should create and stop a replication controller [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":294,"completed":8,"skipped":165,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul 20 01:46:27.635: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-map-dfd5735d-31c6-4b31-b52c-73579c0d6e80
STEP: Creating a pod to test consume configMaps
Jul 20 01:46:27.912: INFO: Waiting up to 5m0s for pod "pod-configmaps-c80ae0e2-48eb-4c8a-8367-ecdafb137fb5" in namespace "configmap-9552" to be "Succeeded or Failed"
Jul 20 01:46:27.953: INFO: Pod "pod-configmaps-c80ae0e2-48eb-4c8a-8367-ecdafb137fb5": Phase="Pending", Reason="", readiness=false. Elapsed: 41.853765ms
Jul 20 01:46:30.293: INFO: Pod "pod-configmaps-c80ae0e2-48eb-4c8a-8367-ecdafb137fb5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.38093839s
Jul 20 01:46:32.295: INFO: Pod "pod-configmaps-c80ae0e2-48eb-4c8a-8367-ecdafb137fb5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.383677072s
Jul 20 01:46:34.299: INFO: Pod "pod-configmaps-c80ae0e2-48eb-4c8a-8367-ecdafb137fb5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.387505504s
STEP: Saw pod success
Jul 20 01:46:34.299: INFO: Pod "pod-configmaps-c80ae0e2-48eb-4c8a-8367-ecdafb137fb5" satisfied condition "Succeeded or Failed"
Jul 20 01:46:34.302: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-c80ae0e2-48eb-4c8a-8367-ecdafb137fb5 container configmap-volume-test:
STEP: delete the pod
Jul 20 01:46:34.482: INFO: Waiting for pod pod-configmaps-c80ae0e2-48eb-4c8a-8367-ecdafb137fb5 to disappear
Jul 20 01:46:34.530: INFO: Pod pod-configmaps-c80ae0e2-48eb-4c8a-8367-ecdafb137fb5 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jul 20 01:46:34.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9552" for this suite.
• [SLOW TEST:6.964 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":294,"completed":9,"skipped":176,"failed":0}
SSSSSSSSSS
------------------------------
[k8s.io] Security Context
  when creating containers with AllowPrivilegeEscalation
    should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul 20 01:46:34.600: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Jul 20 01:46:34.765: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-77c50b0f-26e5-46cc-9b6a-486f29527511" in namespace "security-context-test-2444" to be "Succeeded or Failed"
Jul 20 01:46:34.817: INFO: Pod "alpine-nnp-false-77c50b0f-26e5-46cc-9b6a-486f29527511": Phase="Pending", Reason="", readiness=false. Elapsed: 51.581878ms
Jul 20 01:46:36.894: INFO: Pod "alpine-nnp-false-77c50b0f-26e5-46cc-9b6a-486f29527511": Phase="Pending", Reason="", readiness=false. Elapsed: 2.128934422s
Jul 20 01:46:38.897: INFO: Pod "alpine-nnp-false-77c50b0f-26e5-46cc-9b6a-486f29527511": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.132364718s
Jul 20 01:46:38.897: INFO: Pod "alpine-nnp-false-77c50b0f-26e5-46cc-9b6a-486f29527511" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jul 20 01:46:38.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-2444" for this suite.
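(Editor's sketch, not part of the run: the only interesting field in that pod is the container security context; the rest is the usual "Succeeded or Failed" harness. Image is illustrative, the suite uses a dedicated no-new-privs test image; import assumed: v1 "k8s.io/api/core/v1".)

allow := false
container := v1.Container{
	Name:  "alpine-nnp-false",
	Image: "alpine:3.12", // illustrative
	SecurityContext: &v1.SecurityContext{
		// Sets the no_new_privs flag on the process, so setuid binaries
		// cannot grant the container more privileges than it started with.
		AllowPrivilegeEscalation: &allow,
	},
}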
•{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":10,"skipped":186,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 01:46:38.987: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-0a0256c3-8d1b-4dd6-bec8-9410c13146a7 STEP: Creating a pod to test consume configMaps Jul 20 01:46:39.133: INFO: Waiting up to 5m0s for pod "pod-configmaps-54dd6846-54c8-4e14-8cb4-6555e8743270" in namespace "configmap-1181" to be "Succeeded or Failed" Jul 20 01:46:39.139: INFO: Pod "pod-configmaps-54dd6846-54c8-4e14-8cb4-6555e8743270": Phase="Pending", Reason="", readiness=false. Elapsed: 6.12734ms Jul 20 01:46:41.200: INFO: Pod "pod-configmaps-54dd6846-54c8-4e14-8cb4-6555e8743270": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066471125s Jul 20 01:46:43.204: INFO: Pod "pod-configmaps-54dd6846-54c8-4e14-8cb4-6555e8743270": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.070162116s STEP: Saw pod success Jul 20 01:46:43.204: INFO: Pod "pod-configmaps-54dd6846-54c8-4e14-8cb4-6555e8743270" satisfied condition "Succeeded or Failed" Jul 20 01:46:43.206: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-54dd6846-54c8-4e14-8cb4-6555e8743270 container configmap-volume-test: STEP: delete the pod Jul 20 01:46:43.247: INFO: Waiting for pod pod-configmaps-54dd6846-54c8-4e14-8cb4-6555e8743270 to disappear Jul 20 01:46:43.271: INFO: Pod pod-configmaps-54dd6846-54c8-4e14-8cb4-6555e8743270 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 01:46:43.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1181" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":294,"completed":11,"skipped":208,"failed":0} SSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 01:46:43.280: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jul 20 01:46:48.222: INFO: Successfully updated pod "pod-update-activedeadlineseconds-5030b86b-f4f8-47e1-8495-216bf1f8b0a0" Jul 20 01:46:48.222: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-5030b86b-f4f8-47e1-8495-216bf1f8b0a0" in namespace "pods-218" to be "terminated due to deadline exceeded" Jul 20 01:46:48.231: INFO: Pod "pod-update-activedeadlineseconds-5030b86b-f4f8-47e1-8495-216bf1f8b0a0": Phase="Running", Reason="", readiness=true. Elapsed: 9.237569ms Jul 20 01:46:50.235: INFO: Pod "pod-update-activedeadlineseconds-5030b86b-f4f8-47e1-8495-216bf1f8b0a0": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.013502012s Jul 20 01:46:50.235: INFO: Pod "pod-update-activedeadlineseconds-5030b86b-f4f8-47e1-8495-216bf1f8b0a0" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 01:46:50.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-218" for this suite. 
• [SLOW TEST:6.964 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":294,"completed":12,"skipped":215,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 01:46:50.245: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8654.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8654.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8654.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8654.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8654.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-8654.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8654.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-8654.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8654.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-8654.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8654.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-8654.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8654.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 4.113.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.113.4_udp@PTR;check="$$(dig +tcp +noall +answer +search 4.113.109.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.109.113.4_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8654.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8654.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8654.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8654.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8654.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-8654.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8654.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-8654.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8654.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-8654.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8654.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-8654.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8654.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 4.113.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.113.4_udp@PTR;check="$$(dig +tcp +noall +answer +search 4.113.109.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.109.113.4_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jul 20 01:46:56.498: INFO: Unable to read wheezy_udp@dns-test-service.dns-8654.svc.cluster.local from pod dns-8654/dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a: the server could not find the requested resource (get pods dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a) Jul 20 01:46:56.508: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8654.svc.cluster.local from pod dns-8654/dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a: the server could not find the requested resource (get pods dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a) Jul 20 01:46:56.510: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8654.svc.cluster.local from pod dns-8654/dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a: the server could not find the requested resource (get pods dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a) Jul 20 01:46:56.513: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8654.svc.cluster.local from pod dns-8654/dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a: the server could not find the requested resource (get pods dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a) Jul 20 01:46:56.758: INFO: Unable to read jessie_udp@dns-test-service.dns-8654.svc.cluster.local from pod dns-8654/dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a: the server could not find the requested resource (get pods dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a) Jul 20 01:46:56.765: INFO: Unable to read jessie_tcp@dns-test-service.dns-8654.svc.cluster.local from pod dns-8654/dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a: the server could not find the requested resource (get pods dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a) Jul 20 01:46:56.768: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8654.svc.cluster.local from pod dns-8654/dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a: the server could not find the requested resource (get pods dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a) Jul 20 01:46:56.771: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8654.svc.cluster.local from pod dns-8654/dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a: the server could not find the requested resource (get pods dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a) Jul 20 01:46:56.789: INFO: Lookups using dns-8654/dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a failed for: [wheezy_udp@dns-test-service.dns-8654.svc.cluster.local wheezy_tcp@dns-test-service.dns-8654.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8654.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8654.svc.cluster.local jessie_udp@dns-test-service.dns-8654.svc.cluster.local jessie_tcp@dns-test-service.dns-8654.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8654.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8654.svc.cluster.local] Jul 20 01:47:01.795: INFO: Unable to read wheezy_udp@dns-test-service.dns-8654.svc.cluster.local from pod dns-8654/dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a: the server could not find the requested resource (get pods dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a) Jul 20 01:47:01.802: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8654.svc.cluster.local from pod dns-8654/dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a: the server could not find the requested resource (get pods 
dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a) Jul 20 01:47:01.805: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8654.svc.cluster.local from pod dns-8654/dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a: the server could not find the requested resource (get pods dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a) Jul 20 01:47:01.807: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8654.svc.cluster.local from pod dns-8654/dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a: the server could not find the requested resource (get pods dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a) Jul 20 01:47:01.822: INFO: Unable to read jessie_udp@dns-test-service.dns-8654.svc.cluster.local from pod dns-8654/dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a: the server could not find the requested resource (get pods dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a) Jul 20 01:47:01.825: INFO: Unable to read jessie_tcp@dns-test-service.dns-8654.svc.cluster.local from pod dns-8654/dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a: the server could not find the requested resource (get pods dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a) Jul 20 01:47:01.827: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8654.svc.cluster.local from pod dns-8654/dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a: the server could not find the requested resource (get pods dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a) Jul 20 01:47:01.829: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8654.svc.cluster.local from pod dns-8654/dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a: the server could not find the requested resource (get pods dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a) Jul 20 01:47:01.846: INFO: Lookups using dns-8654/dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a failed for: [wheezy_udp@dns-test-service.dns-8654.svc.cluster.local wheezy_tcp@dns-test-service.dns-8654.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8654.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8654.svc.cluster.local jessie_udp@dns-test-service.dns-8654.svc.cluster.local jessie_tcp@dns-test-service.dns-8654.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8654.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8654.svc.cluster.local] Jul 20 01:47:06.794: INFO: Unable to read wheezy_udp@dns-test-service.dns-8654.svc.cluster.local from pod dns-8654/dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a: the server could not find the requested resource (get pods dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a) Jul 20 01:47:06.798: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8654.svc.cluster.local from pod dns-8654/dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a: the server could not find the requested resource (get pods dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a) Jul 20 01:47:06.801: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8654.svc.cluster.local from pod dns-8654/dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a: the server could not find the requested resource (get pods dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a) Jul 20 01:47:06.804: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8654.svc.cluster.local from pod dns-8654/dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a: the server could not find the requested resource (get pods dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a) Jul 20 01:47:06.825: INFO: Unable to read jessie_udp@dns-test-service.dns-8654.svc.cluster.local from pod dns-8654/dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a: the 
server could not find the requested resource (get pods dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a) Jul 20 01:47:06.828: INFO: Unable to read jessie_tcp@dns-test-service.dns-8654.svc.cluster.local from pod dns-8654/dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a: the server could not find the requested resource (get pods dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a) Jul 20 01:47:06.831: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8654.svc.cluster.local from pod dns-8654/dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a: the server could not find the requested resource (get pods dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a) Jul 20 01:47:06.835: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8654.svc.cluster.local from pod dns-8654/dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a: the server could not find the requested resource (get pods dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a) Jul 20 01:47:06.854: INFO: Lookups using dns-8654/dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a failed for: [wheezy_udp@dns-test-service.dns-8654.svc.cluster.local wheezy_tcp@dns-test-service.dns-8654.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8654.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8654.svc.cluster.local jessie_udp@dns-test-service.dns-8654.svc.cluster.local jessie_tcp@dns-test-service.dns-8654.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8654.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8654.svc.cluster.local] Jul 20 01:47:11.795: INFO: Unable to read wheezy_udp@dns-test-service.dns-8654.svc.cluster.local from pod dns-8654/dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a: the server could not find the requested resource (get pods dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a) Jul 20 01:47:11.798: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8654.svc.cluster.local from pod dns-8654/dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a: the server could not find the requested resource (get pods dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a) Jul 20 01:47:11.801: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8654.svc.cluster.local from pod dns-8654/dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a: the server could not find the requested resource (get pods dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a) Jul 20 01:47:11.804: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8654.svc.cluster.local from pod dns-8654/dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a: the server could not find the requested resource (get pods dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a) Jul 20 01:47:11.820: INFO: Unable to read jessie_udp@dns-test-service.dns-8654.svc.cluster.local from pod dns-8654/dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a: the server could not find the requested resource (get pods dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a) Jul 20 01:47:11.823: INFO: Unable to read jessie_tcp@dns-test-service.dns-8654.svc.cluster.local from pod dns-8654/dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a: the server could not find the requested resource (get pods dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a) Jul 20 01:47:11.825: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8654.svc.cluster.local from pod dns-8654/dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a: the server could not find the requested resource (get pods dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a) Jul 20 01:47:11.828: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8654.svc.cluster.local from pod 
dns-8654/dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a: the server could not find the requested resource (get pods dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a) Jul 20 01:47:11.845: INFO: Lookups using dns-8654/dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a failed for: [wheezy_udp@dns-test-service.dns-8654.svc.cluster.local wheezy_tcp@dns-test-service.dns-8654.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8654.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8654.svc.cluster.local jessie_udp@dns-test-service.dns-8654.svc.cluster.local jessie_tcp@dns-test-service.dns-8654.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8654.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8654.svc.cluster.local] Jul 20 01:47:16.794: INFO: Unable to read wheezy_udp@dns-test-service.dns-8654.svc.cluster.local from pod dns-8654/dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a: the server could not find the requested resource (get pods dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a) Jul 20 01:47:16.798: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8654.svc.cluster.local from pod dns-8654/dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a: the server could not find the requested resource (get pods dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a) Jul 20 01:47:16.802: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8654.svc.cluster.local from pod dns-8654/dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a: the server could not find the requested resource (get pods dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a) Jul 20 01:47:16.805: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8654.svc.cluster.local from pod dns-8654/dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a: the server could not find the requested resource (get pods dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a) Jul 20 01:47:16.830: INFO: Unable to read jessie_udp@dns-test-service.dns-8654.svc.cluster.local from pod dns-8654/dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a: the server could not find the requested resource (get pods dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a) Jul 20 01:47:16.833: INFO: Unable to read jessie_tcp@dns-test-service.dns-8654.svc.cluster.local from pod dns-8654/dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a: the server could not find the requested resource (get pods dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a) Jul 20 01:47:16.836: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8654.svc.cluster.local from pod dns-8654/dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a: the server could not find the requested resource (get pods dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a) Jul 20 01:47:16.839: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8654.svc.cluster.local from pod dns-8654/dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a: the server could not find the requested resource (get pods dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a) Jul 20 01:47:16.859: INFO: Lookups using dns-8654/dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a failed for: [wheezy_udp@dns-test-service.dns-8654.svc.cluster.local wheezy_tcp@dns-test-service.dns-8654.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8654.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8654.svc.cluster.local jessie_udp@dns-test-service.dns-8654.svc.cluster.local jessie_tcp@dns-test-service.dns-8654.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8654.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8654.svc.cluster.local] Jul 20 
01:47:21.794: INFO: Unable to read wheezy_udp@dns-test-service.dns-8654.svc.cluster.local from pod dns-8654/dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a: the server could not find the requested resource (get pods dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a) Jul 20 01:47:21.798: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8654.svc.cluster.local from pod dns-8654/dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a: the server could not find the requested resource (get pods dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a) Jul 20 01:47:21.800: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8654.svc.cluster.local from pod dns-8654/dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a: the server could not find the requested resource (get pods dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a) Jul 20 01:47:21.803: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8654.svc.cluster.local from pod dns-8654/dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a: the server could not find the requested resource (get pods dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a) Jul 20 01:47:21.825: INFO: Unable to read jessie_udp@dns-test-service.dns-8654.svc.cluster.local from pod dns-8654/dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a: the server could not find the requested resource (get pods dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a) Jul 20 01:47:21.827: INFO: Unable to read jessie_tcp@dns-test-service.dns-8654.svc.cluster.local from pod dns-8654/dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a: the server could not find the requested resource (get pods dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a) Jul 20 01:47:21.830: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8654.svc.cluster.local from pod dns-8654/dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a: the server could not find the requested resource (get pods dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a) Jul 20 01:47:21.833: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8654.svc.cluster.local from pod dns-8654/dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a: the server could not find the requested resource (get pods dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a) Jul 20 01:47:21.851: INFO: Lookups using dns-8654/dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a failed for: [wheezy_udp@dns-test-service.dns-8654.svc.cluster.local wheezy_tcp@dns-test-service.dns-8654.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8654.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8654.svc.cluster.local jessie_udp@dns-test-service.dns-8654.svc.cluster.local jessie_tcp@dns-test-service.dns-8654.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8654.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8654.svc.cluster.local] Jul 20 01:47:26.916: INFO: DNS probes using dns-8654/dns-test-c72f48bf-40dc-4a06-86cf-29119df1721a succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 01:47:27.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8654" for this suite. 
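The wheezy/jessie probe pods above loop over dig queries and write OK marker files that the test reads back; the transient "Unable to read ..." retries simply mean the markers had not appeared yet. A simplified stand-in for the same A/SRV/PTR checks using Go's resolver; it must itself run inside a cluster pod so the names resolve through the cluster DNS (the service name and ClusterIP below are copied from this run purely for illustration):

// Sketch: the lookups the dig probes perform, via the stdlib resolver.
package main

import (
	"fmt"
	"net"
)

func main() {
	// A record for the test service.
	addrs, err := net.LookupHost("dns-test-service.dns-8654.svc.cluster.local")
	fmt.Println("A:", addrs, err)

	// SRV record: LookupSRV queries _http._tcp.<name>, as the dig probes do.
	_, srvs, err := net.LookupSRV("http", "tcp", "dns-test-service.dns-8654.svc.cluster.local")
	if err != nil {
		fmt.Println("SRV lookup error:", err)
	}
	for _, s := range srvs {
		fmt.Printf("SRV: %s:%d\n", s.Target, s.Port)
	}

	// PTR record for the service's ClusterIP (10.109.113.4 in this run).
	names, err := net.LookupAddr("10.109.113.4")
	fmt.Println("PTR:", names, err)
}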
• [SLOW TEST:37.660 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":294,"completed":13,"skipped":265,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 01:47:27.905: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 01:47:45.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-445" for this suite. • [SLOW TEST:17.123 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance]","total":294,"completed":14,"skipped":278,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 01:47:45.028: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jul 20 01:47:45.115: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2467df3b-0812-4ab4-97d2-72b07977d7a8" in namespace "projected-1828" to be "Succeeded or Failed" Jul 20 01:47:45.132: INFO: Pod "downwardapi-volume-2467df3b-0812-4ab4-97d2-72b07977d7a8": Phase="Pending", Reason="", readiness=false. Elapsed: 16.271944ms Jul 20 01:47:47.155: INFO: Pod "downwardapi-volume-2467df3b-0812-4ab4-97d2-72b07977d7a8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040197623s Jul 20 01:47:49.179: INFO: Pod "downwardapi-volume-2467df3b-0812-4ab4-97d2-72b07977d7a8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064110704s Jul 20 01:47:51.184: INFO: Pod "downwardapi-volume-2467df3b-0812-4ab4-97d2-72b07977d7a8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.068455018s STEP: Saw pod success Jul 20 01:47:51.184: INFO: Pod "downwardapi-volume-2467df3b-0812-4ab4-97d2-72b07977d7a8" satisfied condition "Succeeded or Failed" Jul 20 01:47:51.187: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-2467df3b-0812-4ab4-97d2-72b07977d7a8 container client-container: STEP: delete the pod Jul 20 01:47:51.211: INFO: Waiting for pod downwardapi-volume-2467df3b-0812-4ab4-97d2-72b07977d7a8 to disappear Jul 20 01:47:51.234: INFO: Pod downwardapi-volume-2467df3b-0812-4ab4-97d2-72b07977d7a8 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 01:47:51.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1828" for this suite. 
• [SLOW TEST:6.273 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":15,"skipped":300,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 01:47:51.301: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-2953d321-d07b-4cb2-a2ec-d8987c4e7327 STEP: Creating a pod to test consume configMaps Jul 20 01:47:51.546: INFO: Waiting up to 5m0s for pod "pod-configmaps-088c9ad6-e258-49ec-8b65-3dc927e09602" in namespace "configmap-2892" to be "Succeeded or Failed" Jul 20 01:47:51.637: INFO: Pod "pod-configmaps-088c9ad6-e258-49ec-8b65-3dc927e09602": Phase="Pending", Reason="", readiness=false. Elapsed: 90.851771ms Jul 20 01:47:53.640: INFO: Pod "pod-configmaps-088c9ad6-e258-49ec-8b65-3dc927e09602": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094053358s Jul 20 01:47:55.644: INFO: Pod "pod-configmaps-088c9ad6-e258-49ec-8b65-3dc927e09602": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.09748996s STEP: Saw pod success Jul 20 01:47:55.644: INFO: Pod "pod-configmaps-088c9ad6-e258-49ec-8b65-3dc927e09602" satisfied condition "Succeeded or Failed" Jul 20 01:47:55.646: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-088c9ad6-e258-49ec-8b65-3dc927e09602 container configmap-volume-test: STEP: delete the pod Jul 20 01:47:55.798: INFO: Waiting for pod pod-configmaps-088c9ad6-e258-49ec-8b65-3dc927e09602 to disappear Jul 20 01:47:55.809: INFO: Pod pod-configmaps-088c9ad6-e258-49ec-8b65-3dc927e09602 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 01:47:55.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2892" for this suite. 
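"mappings and Item mode", as in the configmap-2892 test above, adds two twists to plain ConfigMap consumption: a key is remapped to a nested path, and that file gets its own mode. A sketch of the volume source (illustrative names):

package example

import corev1 "k8s.io/api/core/v1"

// configMapVolumeWithMappings remaps one ConfigMap key to a custom path and
// gives that file an explicit mode.
func configMapVolumeWithMappings() corev1.Volume {
	mode := int32(0400)
	return corev1.Volume{
		Name: "configmap-volume",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume-map"},
				Items: []corev1.KeyToPath{{
					Key:  "data-1",
					Path: "path/to/data-2", // file appears at <mountPath>/path/to/data-2
					Mode: &mode,
				}},
			},
		},
	}
}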
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":16,"skipped":310,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 01:47:55.817: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name cm-test-opt-del-72c84de7-4a5b-4af7-bd63-1f242c4c2628 STEP: Creating configMap with name cm-test-opt-upd-6ff652e8-f735-40c8-ad37-0f70cc695a57 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-72c84de7-4a5b-4af7-bd63-1f242c4c2628 STEP: Updating configmap cm-test-opt-upd-6ff652e8-f735-40c8-ad37-0f70cc695a57 STEP: Creating configMap with name cm-test-opt-create-c226257f-65f1-4d0c-a391-cf0fb9e303cc STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 01:48:04.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5666" for this suite. 
• [SLOW TEST:8.274 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":294,"completed":17,"skipped":320,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 01:48:04.091: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-2dbcdcb4-a82d-4b3c-9c72-5317aaa15f20 STEP: Creating a pod to test consume configMaps Jul 20 01:48:04.691: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ea0f871a-5427-46cb-8261-45f17137f8df" in namespace "projected-109" to be "Succeeded or Failed" Jul 20 01:48:04.695: INFO: Pod "pod-projected-configmaps-ea0f871a-5427-46cb-8261-45f17137f8df": Phase="Pending", Reason="", readiness=false. Elapsed: 4.35448ms Jul 20 01:48:06.755: INFO: Pod "pod-projected-configmaps-ea0f871a-5427-46cb-8261-45f17137f8df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063455824s Jul 20 01:48:08.759: INFO: Pod "pod-projected-configmaps-ea0f871a-5427-46cb-8261-45f17137f8df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.067532493s STEP: Saw pod success Jul 20 01:48:08.759: INFO: Pod "pod-projected-configmaps-ea0f871a-5427-46cb-8261-45f17137f8df" satisfied condition "Succeeded or Failed" Jul 20 01:48:08.761: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-ea0f871a-5427-46cb-8261-45f17137f8df container projected-configmap-volume-test: STEP: delete the pod Jul 20 01:48:08.806: INFO: Waiting for pod pod-projected-configmaps-ea0f871a-5427-46cb-8261-45f17137f8df to disappear Jul 20 01:48:08.833: INFO: Pod pod-projected-configmaps-ea0f871a-5427-46cb-8261-45f17137f8df no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 01:48:08.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-109" for this suite. 
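The defaultMode variant in the projected-109 test above sets a mode once at the projected-volume level rather than per item. A sketch (illustrative name; 0400 as in the per-item example):

package example

import corev1 "k8s.io/api/core/v1"

// projectedConfigMapWithDefaultMode sets a volume-wide defaultMode; any
// projected file without its own per-item Mode inherits it.
func projectedConfigMapWithDefaultMode(name string) corev1.VolumeSource {
	defaultMode := int32(0400)
	return corev1.VolumeSource{
		Projected: &corev1.ProjectedVolumeSource{
			DefaultMode: &defaultMode,
			Sources: []corev1.VolumeProjection{{
				ConfigMap: &corev1.ConfigMapProjection{
					LocalObjectReference: corev1.LocalObjectReference{Name: name},
				},
			}},
		},
	}
}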
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":18,"skipped":330,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 01:48:08.844: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jul 20 01:48:13.507: INFO: Successfully updated pod "pod-update-8bfa949f-5024-40cf-b0d8-42d771f37c78" STEP: verifying the updated pod is in kubernetes Jul 20 01:48:13.596: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 01:48:13.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2265" for this suite. •{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":294,"completed":19,"skipped":420,"failed":0} SSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 01:48:13.604: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:731 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service endpoint-test2 in namespace services-6159 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6159 to expose endpoints map[] Jul 20 01:48:13.734: INFO: Failed go get Endpoints object: endpoints "endpoint-test2" not found Jul 20 01:48:14.748: INFO: successfully validated that service endpoint-test2 in namespace services-6159 exposes endpoints map[] STEP: Creating pod pod1 in namespace services-6159 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6159 to expose endpoints map[pod1:[80]] Jul 20 01:48:18.814: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]], will retry Jul 20 01:48:19.826: INFO: successfully validated that service endpoint-test2 in namespace services-6159 exposes endpoints map[pod1:[80]] STEP: Creating pod pod2 in namespace services-6159 STEP: waiting up to 
3m0s for service endpoint-test2 in namespace services-6159 to expose endpoints map[pod1:[80] pod2:[80]] Jul 20 01:48:24.082: INFO: Unexpected endpoints: found map[4b3d6920-0425-4f8d-baf7-f5c07fafe124:[80]], expected map[pod1:[80] pod2:[80]], will retry Jul 20 01:48:25.083: INFO: successfully validated that service endpoint-test2 in namespace services-6159 exposes endpoints map[pod1:[80] pod2:[80]] STEP: Deleting pod pod1 in namespace services-6159 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6159 to expose endpoints map[pod2:[80]] Jul 20 01:48:25.112: INFO: successfully validated that service endpoint-test2 in namespace services-6159 exposes endpoints map[pod2:[80]] STEP: Deleting pod pod2 in namespace services-6159 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6159 to expose endpoints map[] Jul 20 01:48:25.150: INFO: successfully validated that service endpoint-test2 in namespace services-6159 exposes endpoints map[] [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 01:48:25.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6159" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:735 • [SLOW TEST:11.966 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":294,"completed":20,"skipped":426,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 01:48:25.571: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0720 01:49:06.790803 8 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Jul 20 01:50:08.807: INFO: MetricsGrabber failed to grab metrics. Skipping metrics gathering. 
Jul 20 01:50:08.807: INFO: Deleting pod "simpletest.rc-5zlcq" in namespace "gc-1794" Jul 20 01:50:08.893: INFO: Deleting pod "simpletest.rc-9jtk7" in namespace "gc-1794" Jul 20 01:50:08.937: INFO: Deleting pod "simpletest.rc-9pkk6" in namespace "gc-1794" Jul 20 01:50:09.374: INFO: Deleting pod "simpletest.rc-9t2xx" in namespace "gc-1794" Jul 20 01:50:09.584: INFO: Deleting pod "simpletest.rc-dm5qt" in namespace "gc-1794" Jul 20 01:50:09.811: INFO: Deleting pod "simpletest.rc-f5xs9" in namespace "gc-1794" Jul 20 01:50:10.321: INFO: Deleting pod "simpletest.rc-hzj6l" in namespace "gc-1794" Jul 20 01:50:10.370: INFO: Deleting pod "simpletest.rc-p26gm" in namespace "gc-1794" Jul 20 01:50:10.799: INFO: Deleting pod "simpletest.rc-v8j5x" in namespace "gc-1794" Jul 20 01:50:11.141: INFO: Deleting pod "simpletest.rc-whnm9" in namespace "gc-1794" [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 01:50:11.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1794" for this suite. • [SLOW TEST:106.169 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":294,"completed":21,"skipped":454,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 01:50:11.741: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 01:50:12.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5618" for this suite. 
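Returning to the garbage-collector test above: "delete options say so" refers to the Orphan propagation policy on the delete call. With it, the garbage collector strips the pods' ownerReferences instead of deleting them, which is why the log shows the test removing each simpletest.rc-* pod by hand afterwards. A sketch of that call (clientset and names assumed):

package example

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteRCOrphaningPods deletes a replication controller while leaving its
// pods running as orphans.
func deleteRCOrphaningPods(client kubernetes.Interface, ns, name string) error {
	orphan := metav1.DeletePropagationOrphan
	return client.CoreV1().ReplicationControllers(ns).Delete(
		context.TODO(), name, metav1.DeleteOptions{PropagationPolicy: &orphan})
}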
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":294,"completed":22,"skipped":465,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 01:50:12.304: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap configmap-8191/configmap-test-da177b88-9217-4d31-bc6a-96eba556f2c7 STEP: Creating a pod to test consume configMaps Jul 20 01:50:12.811: INFO: Waiting up to 5m0s for pod "pod-configmaps-56d015f9-df57-47f1-9649-8b924c26c2ba" in namespace "configmap-8191" to be "Succeeded or Failed" Jul 20 01:50:13.014: INFO: Pod "pod-configmaps-56d015f9-df57-47f1-9649-8b924c26c2ba": Phase="Pending", Reason="", readiness=false. Elapsed: 202.98574ms Jul 20 01:50:15.440: INFO: Pod "pod-configmaps-56d015f9-df57-47f1-9649-8b924c26c2ba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.629001656s Jul 20 01:50:17.482: INFO: Pod "pod-configmaps-56d015f9-df57-47f1-9649-8b924c26c2ba": Phase="Pending", Reason="", readiness=false. Elapsed: 4.670363193s Jul 20 01:50:19.565: INFO: Pod "pod-configmaps-56d015f9-df57-47f1-9649-8b924c26c2ba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.753908798s STEP: Saw pod success Jul 20 01:50:19.565: INFO: Pod "pod-configmaps-56d015f9-df57-47f1-9649-8b924c26c2ba" satisfied condition "Succeeded or Failed" Jul 20 01:50:19.614: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-56d015f9-df57-47f1-9649-8b924c26c2ba container env-test: STEP: delete the pod Jul 20 01:50:19.840: INFO: Waiting for pod pod-configmaps-56d015f9-df57-47f1-9649-8b924c26c2ba to disappear Jul 20 01:50:19.906: INFO: Pod pod-configmaps-56d015f9-df57-47f1-9649-8b924c26c2ba no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 01:50:19.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8191" for this suite. 
• [SLOW TEST:7.635 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:34 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":294,"completed":23,"skipped":488,"failed":0} SSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 01:50:19.939: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-ef3d6c77-071f-488e-80c3-99663c1ca12f STEP: Creating a pod to test consume configMaps Jul 20 01:50:20.183: INFO: Waiting up to 5m0s for pod "pod-configmaps-20e8777e-63d8-4cfb-ad29-b122ee7b1148" in namespace "configmap-6030" to be "Succeeded or Failed" Jul 20 01:50:20.236: INFO: Pod "pod-configmaps-20e8777e-63d8-4cfb-ad29-b122ee7b1148": Phase="Pending", Reason="", readiness=false. Elapsed: 52.743365ms Jul 20 01:50:22.240: INFO: Pod "pod-configmaps-20e8777e-63d8-4cfb-ad29-b122ee7b1148": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056785768s Jul 20 01:50:24.266: INFO: Pod "pod-configmaps-20e8777e-63d8-4cfb-ad29-b122ee7b1148": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.083020007s STEP: Saw pod success Jul 20 01:50:24.266: INFO: Pod "pod-configmaps-20e8777e-63d8-4cfb-ad29-b122ee7b1148" satisfied condition "Succeeded or Failed" Jul 20 01:50:24.270: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-20e8777e-63d8-4cfb-ad29-b122ee7b1148 container configmap-volume-test: STEP: delete the pod Jul 20 01:50:24.315: INFO: Waiting for pod pod-configmaps-20e8777e-63d8-4cfb-ad29-b122ee7b1148 to disappear Jul 20 01:50:24.326: INFO: Pod pod-configmaps-20e8777e-63d8-4cfb-ad29-b122ee7b1148 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 01:50:24.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6030" for this suite. 
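For the "as non-root" variant above, the only material difference from the earlier volume sketches is the pod security context; ConfigMap volume files default to mode 0644, so a non-root UID can still read them. Sketch (the UID is illustrative):

package example

import corev1 "k8s.io/api/core/v1"

// nonRootPodSecurityContext runs the pod's containers as UID 1000 and asks
// the kubelet to reject any image that would run as root.
func nonRootPodSecurityContext() *corev1.PodSecurityContext {
	uid := int64(1000)
	nonRoot := true
	return &corev1.PodSecurityContext{
		RunAsUser:    &uid,
		RunAsNonRoot: &nonRoot,
	}
}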
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":294,"completed":24,"skipped":495,"failed":0} SSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 01:50:24.332: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:731 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service externalname-service with the type=ExternalName in namespace services-7756 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-7756 I0720 01:50:24.777823 8 runners.go:190] Created replication controller with name: externalname-service, namespace: services-7756, replica count: 2 I0720 01:50:27.828260 8 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0720 01:50:30.828539 8 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jul 20 01:50:30.828: INFO: Creating new exec pod Jul 20 01:50:35.854: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-7756 execpodznp8l -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Jul 20 01:50:36.146: INFO: stderr: "I0720 01:50:36.040930 333 log.go:181] (0xc00079ebb0) (0xc0009e0dc0) Create stream\nI0720 01:50:36.040992 333 log.go:181] (0xc00079ebb0) (0xc0009e0dc0) Stream added, broadcasting: 1\nI0720 01:50:36.042942 333 log.go:181] (0xc00079ebb0) Reply frame received for 1\nI0720 01:50:36.042983 333 log.go:181] (0xc00079ebb0) (0xc000830460) Create stream\nI0720 01:50:36.042994 333 log.go:181] (0xc00079ebb0) (0xc000830460) Stream added, broadcasting: 3\nI0720 01:50:36.043856 333 log.go:181] (0xc00079ebb0) Reply frame received for 3\nI0720 01:50:36.043899 333 log.go:181] (0xc00079ebb0) (0xc0006ea6e0) Create stream\nI0720 01:50:36.043913 333 log.go:181] (0xc00079ebb0) (0xc0006ea6e0) Stream added, broadcasting: 5\nI0720 01:50:36.044887 333 log.go:181] (0xc00079ebb0) Reply frame received for 5\nI0720 01:50:36.138067 333 log.go:181] (0xc00079ebb0) Data frame received for 3\nI0720 01:50:36.138128 333 log.go:181] (0xc000830460) (3) Data frame handling\nI0720 01:50:36.138165 333 log.go:181] (0xc00079ebb0) Data frame received for 5\nI0720 01:50:36.138183 333 log.go:181] (0xc0006ea6e0) (5) Data frame handling\nI0720 01:50:36.138212 333 log.go:181] (0xc0006ea6e0) (5) Data frame sent\nI0720 01:50:36.138238 333 log.go:181] (0xc00079ebb0) Data frame received for 5\nI0720 01:50:36.138255 333 log.go:181] (0xc0006ea6e0) (5) Data frame handling\n+ nc -zv -t -w 2 externalname-service 80\nConnection to 
externalname-service 80 port [tcp/http] succeeded!\nI0720 01:50:36.140224 333 log.go:181] (0xc00079ebb0) Data frame received for 1\nI0720 01:50:36.140249 333 log.go:181] (0xc0009e0dc0) (1) Data frame handling\nI0720 01:50:36.140264 333 log.go:181] (0xc0009e0dc0) (1) Data frame sent\nI0720 01:50:36.140282 333 log.go:181] (0xc00079ebb0) (0xc0009e0dc0) Stream removed, broadcasting: 1\nI0720 01:50:36.140346 333 log.go:181] (0xc00079ebb0) Go away received\nI0720 01:50:36.140713 333 log.go:181] (0xc00079ebb0) (0xc0009e0dc0) Stream removed, broadcasting: 1\nI0720 01:50:36.140822 333 log.go:181] (0xc00079ebb0) (0xc000830460) Stream removed, broadcasting: 3\nI0720 01:50:36.140840 333 log.go:181] (0xc00079ebb0) (0xc0006ea6e0) Stream removed, broadcasting: 5\n" Jul 20 01:50:36.146: INFO: stdout: "" Jul 20 01:50:36.146: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-7756 execpodznp8l -- /bin/sh -x -c nc -zv -t -w 2 10.108.42.202 80' Jul 20 01:50:36.334: INFO: stderr: "I0720 01:50:36.268437 351 log.go:181] (0xc000caed10) (0xc000db4280) Create stream\nI0720 01:50:36.268515 351 log.go:181] (0xc000caed10) (0xc000db4280) Stream added, broadcasting: 1\nI0720 01:50:36.274026 351 log.go:181] (0xc000caed10) Reply frame received for 1\nI0720 01:50:36.274061 351 log.go:181] (0xc000caed10) (0xc000ac1180) Create stream\nI0720 01:50:36.274068 351 log.go:181] (0xc000caed10) (0xc000ac1180) Stream added, broadcasting: 3\nI0720 01:50:36.275091 351 log.go:181] (0xc000caed10) Reply frame received for 3\nI0720 01:50:36.275111 351 log.go:181] (0xc000caed10) (0xc000982460) Create stream\nI0720 01:50:36.275119 351 log.go:181] (0xc000caed10) (0xc000982460) Stream added, broadcasting: 5\nI0720 01:50:36.276044 351 log.go:181] (0xc000caed10) Reply frame received for 5\nI0720 01:50:36.324479 351 log.go:181] (0xc000caed10) Data frame received for 5\nI0720 01:50:36.324517 351 log.go:181] (0xc000982460) (5) Data frame handling\nI0720 01:50:36.324551 351 log.go:181] (0xc000982460) (5) Data frame sent\nI0720 01:50:36.324571 351 log.go:181] (0xc000caed10) Data frame received for 5\nI0720 01:50:36.324584 351 log.go:181] (0xc000982460) (5) Data frame handling\n+ nc -zv -t -w 2 10.108.42.202 80\nConnection to 10.108.42.202 80 port [tcp/http] succeeded!\nI0720 01:50:36.324615 351 log.go:181] (0xc000caed10) Data frame received for 3\nI0720 01:50:36.324626 351 log.go:181] (0xc000ac1180) (3) Data frame handling\nI0720 01:50:36.327198 351 log.go:181] (0xc000caed10) Data frame received for 1\nI0720 01:50:36.327223 351 log.go:181] (0xc000db4280) (1) Data frame handling\nI0720 01:50:36.327234 351 log.go:181] (0xc000db4280) (1) Data frame sent\nI0720 01:50:36.327254 351 log.go:181] (0xc000caed10) (0xc000db4280) Stream removed, broadcasting: 1\nI0720 01:50:36.327362 351 log.go:181] (0xc000caed10) Go away received\nI0720 01:50:36.327665 351 log.go:181] (0xc000caed10) (0xc000db4280) Stream removed, broadcasting: 1\nI0720 01:50:36.327686 351 log.go:181] (0xc000caed10) (0xc000ac1180) Stream removed, broadcasting: 3\nI0720 01:50:36.327694 351 log.go:181] (0xc000caed10) (0xc000982460) Stream removed, broadcasting: 5\n" Jul 20 01:50:36.334: INFO: stdout: "" Jul 20 01:50:36.334: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 01:50:36.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "services-7756" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:735 • [SLOW TEST:12.030 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":294,"completed":25,"skipped":498,"failed":0} SSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 01:50:36.362: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-213c7c68-f552-4d3f-88f8-ba8060a4ca39 STEP: Creating a pod to test consume secrets Jul 20 01:50:36.509: INFO: Waiting up to 5m0s for pod "pod-secrets-83e6fdac-b252-4b3b-9a25-eb387dcf5373" in namespace "secrets-5551" to be "Succeeded or Failed" Jul 20 01:50:36.527: INFO: Pod "pod-secrets-83e6fdac-b252-4b3b-9a25-eb387dcf5373": Phase="Pending", Reason="", readiness=false. Elapsed: 18.043868ms Jul 20 01:50:38.553: INFO: Pod "pod-secrets-83e6fdac-b252-4b3b-9a25-eb387dcf5373": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044369009s Jul 20 01:50:40.557: INFO: Pod "pod-secrets-83e6fdac-b252-4b3b-9a25-eb387dcf5373": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.04840364s STEP: Saw pod success Jul 20 01:50:40.557: INFO: Pod "pod-secrets-83e6fdac-b252-4b3b-9a25-eb387dcf5373" satisfied condition "Succeeded or Failed" Jul 20 01:50:40.560: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-83e6fdac-b252-4b3b-9a25-eb387dcf5373 container secret-volume-test: STEP: delete the pod Jul 20 01:50:40.596: INFO: Waiting for pod pod-secrets-83e6fdac-b252-4b3b-9a25-eb387dcf5373 to disappear Jul 20 01:50:40.614: INFO: Pod pod-secrets-83e6fdac-b252-4b3b-9a25-eb387dcf5373 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 01:50:40.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5551" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":26,"skipped":501,"failed":0} SSSSSSS ------------------------------ [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] PodTemplates /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 01:50:40.787: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should run the lifecycle of PodTemplates [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-node] PodTemplates /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 01:50:40.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-1447" for this suite. •{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":294,"completed":27,"skipped":508,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 01:50:40.974: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on node default medium Jul 20 01:50:41.052: INFO: Waiting up to 5m0s for pod "pod-89fac8b9-2e98-47a0-9881-55611585ad98" in namespace "emptydir-8782" to be "Succeeded or Failed" Jul 20 01:50:41.057: INFO: Pod "pod-89fac8b9-2e98-47a0-9881-55611585ad98": Phase="Pending", Reason="", readiness=false. Elapsed: 5.068032ms Jul 20 01:50:43.086: INFO: Pod "pod-89fac8b9-2e98-47a0-9881-55611585ad98": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033640944s Jul 20 01:50:45.090: INFO: Pod "pod-89fac8b9-2e98-47a0-9881-55611585ad98": Phase="Running", Reason="", readiness=true. Elapsed: 4.038013248s Jul 20 01:50:47.094: INFO: Pod "pod-89fac8b9-2e98-47a0-9881-55611585ad98": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.041496428s STEP: Saw pod success Jul 20 01:50:47.094: INFO: Pod "pod-89fac8b9-2e98-47a0-9881-55611585ad98" satisfied condition "Succeeded or Failed" Jul 20 01:50:47.096: INFO: Trying to get logs from node latest-worker2 pod pod-89fac8b9-2e98-47a0-9881-55611585ad98 container test-container: STEP: delete the pod Jul 20 01:50:47.119: INFO: Waiting for pod pod-89fac8b9-2e98-47a0-9881-55611585ad98 to disappear Jul 20 01:50:47.159: INFO: Pod pod-89fac8b9-2e98-47a0-9881-55611585ad98 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 01:50:47.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8782" for this suite. • [SLOW TEST:6.193 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":28,"skipped":538,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 01:50:47.168: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-df90a932-b356-479e-9c91-d52b90533192 STEP: Creating a pod to test consume configMaps Jul 20 01:50:47.258: INFO: Waiting up to 5m0s for pod "pod-configmaps-06a85c51-cf64-42ff-be82-e7f974dd8e84" in namespace "configmap-6732" to be "Succeeded or Failed" Jul 20 01:50:47.267: INFO: Pod "pod-configmaps-06a85c51-cf64-42ff-be82-e7f974dd8e84": Phase="Pending", Reason="", readiness=false. Elapsed: 9.566308ms Jul 20 01:50:49.332: INFO: Pod "pod-configmaps-06a85c51-cf64-42ff-be82-e7f974dd8e84": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074078497s Jul 20 01:50:51.336: INFO: Pod "pod-configmaps-06a85c51-cf64-42ff-be82-e7f974dd8e84": Phase="Pending", Reason="", readiness=false. Elapsed: 4.078717666s Jul 20 01:50:53.346: INFO: Pod "pod-configmaps-06a85c51-cf64-42ff-be82-e7f974dd8e84": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.088824318s STEP: Saw pod success Jul 20 01:50:53.346: INFO: Pod "pod-configmaps-06a85c51-cf64-42ff-be82-e7f974dd8e84" satisfied condition "Succeeded or Failed" Jul 20 01:50:53.349: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-06a85c51-cf64-42ff-be82-e7f974dd8e84 container configmap-volume-test: STEP: delete the pod Jul 20 01:50:53.699: INFO: Waiting for pod pod-configmaps-06a85c51-cf64-42ff-be82-e7f974dd8e84 to disappear Jul 20 01:50:53.777: INFO: Pod pod-configmaps-06a85c51-cf64-42ff-be82-e7f974dd8e84 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 01:50:53.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6732" for this suite. • [SLOW TEST:6.618 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":294,"completed":29,"skipped":549,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 01:50:53.786: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 20 01:50:54.774: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created Jul 20 01:50:56.783: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730806654, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730806654, loc:(*time.Location)(0x7deddc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730806654, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730806654, loc:(*time.Location)(0x7deddc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service 
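
The two DeploymentStatus dumps above are the framework polling until the webhook deployment reports minimum availability. Outside the suite the same wait is commonly expressed with kubectl; a minimal sketch, assuming the deployment name from the log and its namespace at the time (the timeout value is arbitrary):

# Block until the rollout finishes (ready replicas == desired), or fail after 2m.
kubectl rollout status deployment/sample-webhook-deployment --timeout=2m
# Equivalent wait on the Available condition that the status dump shows as False.
kubectl wait deployment/sample-webhook-deployment --for=condition=Available --timeout=2m
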
STEP: Verifying the service has paired with the endpoint Jul 20 01:50:59.830: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 20 01:50:59.837: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 01:51:01.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3611" for this suite. STEP: Destroying namespace "webhook-3611-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.950 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":294,"completed":30,"skipped":565,"failed":0} SSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 01:51:01.736: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:89 Jul 20 01:51:02.032: INFO: Waiting up to 1m0s for all nodes to be ready Jul 20 01:52:02.076: INFO: Waiting for terminating namespaces to be deleted... [It] validates basic preemption works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create pods that use 2/3 of node resources. Jul 20 01:52:02.119: INFO: Created pod: pod0-sched-preemption-low-priority Jul 20 01:52:02.151: INFO: Created pod: pod1-sched-preemption-medium-priority STEP: Wait for pods to be scheduled. 
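
Basic preemption, as driven here, needs nothing more than two PriorityClass objects and pods that reference them; the scheduler evicts a lower-priority victim when a higher-priority pod cannot otherwise fit. A minimal sketch with made-up names, priority values, and resource requests:

# Two priority levels; the higher value wins when the scheduler must preempt.
kubectl create priorityclass sandbox-low --value=10 --description="filler pods"
kubectl create priorityclass sandbox-high --value=1000 --description="preemptor"
# The preemptor opts in via spec.priorityClassName and requests enough
# CPU that a low-priority pod has to be evicted to schedule it.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: preemptor-pod
spec:
  priorityClassName: sandbox-high
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.2
    resources:
      requests:
        cpu: "500m"
EOF
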
STEP: Run a high priority pod that has same requirements as that of lower priority pod [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 01:52:28.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-7123" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77 • [SLOW TEST:86.637 seconds] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates basic preemption works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","total":294,"completed":31,"skipped":572,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 01:52:28.373: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 01:52:44.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3221" for this suite. • [SLOW TEST:16.120 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance]","total":294,"completed":32,"skipped":611,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 01:52:44.494: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:255 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: validating cluster-info Jul 20 01:52:44.559: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config cluster-info' Jul 20 01:52:44.659: INFO: stderr: "" Jul 20 01:52:44.659: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:42901\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:42901/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 01:52:44.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-460" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":294,"completed":33,"skipped":630,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 01:52:44.667: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-7268212a-0872-42a3-b9a6-7073e59c8bf3 STEP: Creating a pod to test consume configMaps Jul 20 01:52:44.744: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8a7480b2-c3bb-4b76-bedc-4199a8c9a4f3" in namespace "projected-4165" to be "Succeeded or Failed" Jul 20 01:52:44.771: INFO: Pod "pod-projected-configmaps-8a7480b2-c3bb-4b76-bedc-4199a8c9a4f3": Phase="Pending", Reason="", readiness=false. 
Elapsed: 26.34704ms Jul 20 01:52:46.835: INFO: Pod "pod-projected-configmaps-8a7480b2-c3bb-4b76-bedc-4199a8c9a4f3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.090652251s Jul 20 01:52:48.903: INFO: Pod "pod-projected-configmaps-8a7480b2-c3bb-4b76-bedc-4199a8c9a4f3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.158705686s STEP: Saw pod success Jul 20 01:52:48.903: INFO: Pod "pod-projected-configmaps-8a7480b2-c3bb-4b76-bedc-4199a8c9a4f3" satisfied condition "Succeeded or Failed" Jul 20 01:52:48.906: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-8a7480b2-c3bb-4b76-bedc-4199a8c9a4f3 container projected-configmap-volume-test: STEP: delete the pod Jul 20 01:52:49.266: INFO: Waiting for pod pod-projected-configmaps-8a7480b2-c3bb-4b76-bedc-4199a8c9a4f3 to disappear Jul 20 01:52:49.423: INFO: Pod pod-projected-configmaps-8a7480b2-c3bb-4b76-bedc-4199a8c9a4f3 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 01:52:49.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4165" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":34,"skipped":663,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 01:52:49.438: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:255 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1540 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jul 20 01:52:49.496: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-7452' Jul 20 01:52:49.636: INFO: stderr: "" Jul 20 01:52:49.636: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1545 Jul 20 01:52:49.712: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-7452' Jul 20 01:52:52.802: INFO: stderr: "" Jul 20 01:52:52.802: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 01:52:52.802: INFO: Waiting up to 3m0s for all 
(but 0) nodes to be ready STEP: Destroying namespace "kubectl-7452" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":294,"completed":35,"skipped":668,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 01:52:52.811: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jul 20 01:52:52.859: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f7f8d96c-90a2-495c-9d0c-cefc4d7e9e28" in namespace "downward-api-8118" to be "Succeeded or Failed" Jul 20 01:52:52.873: INFO: Pod "downwardapi-volume-f7f8d96c-90a2-495c-9d0c-cefc4d7e9e28": Phase="Pending", Reason="", readiness=false. Elapsed: 13.408349ms Jul 20 01:52:54.945: INFO: Pod "downwardapi-volume-f7f8d96c-90a2-495c-9d0c-cefc4d7e9e28": Phase="Pending", Reason="", readiness=false. Elapsed: 2.085302366s Jul 20 01:52:56.948: INFO: Pod "downwardapi-volume-f7f8d96c-90a2-495c-9d0c-cefc4d7e9e28": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.089167758s STEP: Saw pod success Jul 20 01:52:56.949: INFO: Pod "downwardapi-volume-f7f8d96c-90a2-495c-9d0c-cefc4d7e9e28" satisfied condition "Succeeded or Failed" Jul 20 01:52:56.951: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-f7f8d96c-90a2-495c-9d0c-cefc4d7e9e28 container client-container: STEP: delete the pod Jul 20 01:52:56.968: INFO: Waiting for pod downwardapi-volume-f7f8d96c-90a2-495c-9d0c-cefc4d7e9e28 to disappear Jul 20 01:52:57.169: INFO: Pod downwardapi-volume-f7f8d96c-90a2-495c-9d0c-cefc4d7e9e28 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 01:52:57.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8118" for this suite. 
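
The mode-on-item assertion above corresponds to the downwardAPI volume's per-item mode field, which overrides the volume-level defaultMode for a single projected file. A sketch of the relevant pod spec; the pod name, mount path, and the 0400 mode are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-mode-demo
  labels:
    app: demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "ls -ln /etc/podinfo && cat /etc/podinfo/labels"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels
        mode: 0400   # per-item mode; only this projected file gets it
EOF
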
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":36,"skipped":705,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 01:52:57.249: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Jul 20 01:52:57.341: INFO: Waiting up to 5m0s for pod "downward-api-6cf1cb8d-dc0c-4d12-b6ea-0c59b1828835" in namespace "downward-api-3342" to be "Succeeded or Failed" Jul 20 01:52:57.354: INFO: Pod "downward-api-6cf1cb8d-dc0c-4d12-b6ea-0c59b1828835": Phase="Pending", Reason="", readiness=false. Elapsed: 12.974199ms Jul 20 01:52:59.358: INFO: Pod "downward-api-6cf1cb8d-dc0c-4d12-b6ea-0c59b1828835": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016502113s Jul 20 01:53:01.361: INFO: Pod "downward-api-6cf1cb8d-dc0c-4d12-b6ea-0c59b1828835": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019844908s STEP: Saw pod success Jul 20 01:53:01.361: INFO: Pod "downward-api-6cf1cb8d-dc0c-4d12-b6ea-0c59b1828835" satisfied condition "Succeeded or Failed" Jul 20 01:53:01.364: INFO: Trying to get logs from node latest-worker2 pod downward-api-6cf1cb8d-dc0c-4d12-b6ea-0c59b1828835 container dapi-container: STEP: delete the pod Jul 20 01:53:01.418: INFO: Waiting for pod downward-api-6cf1cb8d-dc0c-4d12-b6ea-0c59b1828835 to disappear Jul 20 01:53:01.495: INFO: Pod downward-api-6cf1cb8d-dc0c-4d12-b6ea-0c59b1828835 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 01:53:01.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3342" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":294,"completed":37,"skipped":736,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 01:53:01.504: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 01:53:12.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6089" for this suite. • [SLOW TEST:11.240 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":294,"completed":38,"skipped":755,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 01:53:12.745: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:255 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Starting the proxy Jul 20 01:53:13.049: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix133058281/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 01:53:13.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8180" for this suite. 
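
The --unix-socket flag checked above makes kubectl proxy serve the API over a local socket instead of binding the default 127.0.0.1:8001, which is how the test retrieves the /api/ output. The same behavior can be reproduced with any HTTP client that speaks unix sockets; the socket path here is illustrative:

# Serve the API on a unix socket (no TCP port is opened).
kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
# curl can target the socket directly; the URL's host part is ignored.
curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/
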
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":294,"completed":39,"skipped":805,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 01:53:13.192: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:731 [It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-5598 STEP: creating service affinity-clusterip-transition in namespace services-5598 STEP: creating replication controller affinity-clusterip-transition in namespace services-5598 I0720 01:53:13.374342 8 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-5598, replica count: 3 I0720 01:53:16.424846 8 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0720 01:53:19.425090 8 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0720 01:53:22.425366 8 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jul 20 01:53:22.431: INFO: Creating new exec pod Jul 20 01:53:27.669: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-5598 execpod-affinityjm4bd -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80' Jul 20 01:53:27.904: INFO: stderr: "I0720 01:53:27.810559 438 log.go:181] (0xc000e0a370) (0xc00059d900) Create stream\nI0720 01:53:27.810662 438 log.go:181] (0xc000e0a370) (0xc00059d900) Stream added, broadcasting: 1\nI0720 01:53:27.812408 438 log.go:181] (0xc000e0a370) Reply frame received for 1\nI0720 01:53:27.812452 438 log.go:181] (0xc000e0a370) (0xc0003c3720) Create stream\nI0720 01:53:27.812464 438 log.go:181] (0xc000e0a370) (0xc0003c3720) Stream added, broadcasting: 3\nI0720 01:53:27.813488 438 log.go:181] (0xc000e0a370) Reply frame received for 3\nI0720 01:53:27.813526 438 log.go:181] (0xc000e0a370) (0xc0004bf2c0) Create stream\nI0720 01:53:27.813544 438 log.go:181] (0xc000e0a370) (0xc0004bf2c0) Stream added, broadcasting: 5\nI0720 01:53:27.814433 438 log.go:181] (0xc000e0a370) Reply frame received for 5\nI0720 01:53:27.895473 438 log.go:181] (0xc000e0a370) Data frame received for 5\nI0720 01:53:27.895503 438 log.go:181] (0xc0004bf2c0) (5) Data frame handling\nI0720 01:53:27.895519 438 log.go:181] (0xc0004bf2c0) (5) Data frame sent\nI0720 01:53:27.895526 438 log.go:181] (0xc000e0a370) Data frame received for 5\nI0720 01:53:27.895532 438 log.go:181] 
(0xc0004bf2c0) (5) Data frame handling\n+ nc -zv -t -w 2 affinity-clusterip-transition 80\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\nI0720 01:53:27.895606 438 log.go:181] (0xc0004bf2c0) (5) Data frame sent\nI0720 01:53:27.895706 438 log.go:181] (0xc000e0a370) Data frame received for 5\nI0720 01:53:27.895730 438 log.go:181] (0xc0004bf2c0) (5) Data frame handling\nI0720 01:53:27.896192 438 log.go:181] (0xc000e0a370) Data frame received for 3\nI0720 01:53:27.896234 438 log.go:181] (0xc0003c3720) (3) Data frame handling\nI0720 01:53:27.899426 438 log.go:181] (0xc000e0a370) Data frame received for 1\nI0720 01:53:27.899467 438 log.go:181] (0xc00059d900) (1) Data frame handling\nI0720 01:53:27.899478 438 log.go:181] (0xc00059d900) (1) Data frame sent\nI0720 01:53:27.899492 438 log.go:181] (0xc000e0a370) (0xc00059d900) Stream removed, broadcasting: 1\nI0720 01:53:27.899520 438 log.go:181] (0xc000e0a370) Go away received\nI0720 01:53:27.899941 438 log.go:181] (0xc000e0a370) (0xc00059d900) Stream removed, broadcasting: 1\nI0720 01:53:27.899966 438 log.go:181] (0xc000e0a370) (0xc0003c3720) Stream removed, broadcasting: 3\nI0720 01:53:27.899978 438 log.go:181] (0xc000e0a370) (0xc0004bf2c0) Stream removed, broadcasting: 5\n" Jul 20 01:53:27.904: INFO: stdout: "" Jul 20 01:53:27.905: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-5598 execpod-affinityjm4bd -- /bin/sh -x -c nc -zv -t -w 2 10.111.98.249 80' Jul 20 01:53:28.149: INFO: stderr: "I0720 01:53:28.045574 456 log.go:181] (0xc000bb9080) (0xc000ab7860) Create stream\nI0720 01:53:28.045639 456 log.go:181] (0xc000bb9080) (0xc000ab7860) Stream added, broadcasting: 1\nI0720 01:53:28.051064 456 log.go:181] (0xc000bb9080) Reply frame received for 1\nI0720 01:53:28.051097 456 log.go:181] (0xc000bb9080) (0xc000a86aa0) Create stream\nI0720 01:53:28.051105 456 log.go:181] (0xc000bb9080) (0xc000a86aa0) Stream added, broadcasting: 3\nI0720 01:53:28.052015 456 log.go:181] (0xc000bb9080) Reply frame received for 3\nI0720 01:53:28.052055 456 log.go:181] (0xc000bb9080) (0xc0001950e0) Create stream\nI0720 01:53:28.052080 456 log.go:181] (0xc000bb9080) (0xc0001950e0) Stream added, broadcasting: 5\nI0720 01:53:28.053185 456 log.go:181] (0xc000bb9080) Reply frame received for 5\nI0720 01:53:28.141792 456 log.go:181] (0xc000bb9080) Data frame received for 5\nI0720 01:53:28.141834 456 log.go:181] (0xc0001950e0) (5) Data frame handling\nI0720 01:53:28.141852 456 log.go:181] (0xc0001950e0) (5) Data frame sent\nI0720 01:53:28.141870 456 log.go:181] (0xc000bb9080) Data frame received for 5\nI0720 01:53:28.141886 456 log.go:181] (0xc0001950e0) (5) Data frame handling\n+ nc -zv -t -w 2 10.111.98.249 80\nConnection to 10.111.98.249 80 port [tcp/http] succeeded!\nI0720 01:53:28.141918 456 log.go:181] (0xc000bb9080) Data frame received for 3\nI0720 01:53:28.141951 456 log.go:181] (0xc000a86aa0) (3) Data frame handling\nI0720 01:53:28.143251 456 log.go:181] (0xc000bb9080) Data frame received for 1\nI0720 01:53:28.143273 456 log.go:181] (0xc000ab7860) (1) Data frame handling\nI0720 01:53:28.143288 456 log.go:181] (0xc000ab7860) (1) Data frame sent\nI0720 01:53:28.143308 456 log.go:181] (0xc000bb9080) (0xc000ab7860) Stream removed, broadcasting: 1\nI0720 01:53:28.143328 456 log.go:181] (0xc000bb9080) Go away received\nI0720 01:53:28.143877 456 log.go:181] (0xc000bb9080) (0xc000ab7860) Stream removed, broadcasting: 1\nI0720 01:53:28.143901 456 log.go:181] 
(0xc000bb9080) (0xc000a86aa0) Stream removed, broadcasting: 3\nI0720 01:53:28.143912 456 log.go:181] (0xc000bb9080) (0xc0001950e0) Stream removed, broadcasting: 5\n" Jul 20 01:53:28.149: INFO: stdout: "" Jul 20 01:53:28.157: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-5598 execpod-affinityjm4bd -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.111.98.249:80/ ; done' Jul 20 01:53:28.545: INFO: stderr: "I0720 01:53:28.371015 474 log.go:181] (0xc000c83290) (0xc000937720) Create stream\nI0720 01:53:28.371096 474 log.go:181] (0xc000c83290) (0xc000937720) Stream added, broadcasting: 1\nI0720 01:53:28.374549 474 log.go:181] (0xc000c83290) Reply frame received for 1\nI0720 01:53:28.374599 474 log.go:181] (0xc000c83290) (0xc0009377c0) Create stream\nI0720 01:53:28.374611 474 log.go:181] (0xc000c83290) (0xc0009377c0) Stream added, broadcasting: 3\nI0720 01:53:28.375712 474 log.go:181] (0xc000c83290) Reply frame received for 3\nI0720 01:53:28.375745 474 log.go:181] (0xc000c83290) (0xc000937860) Create stream\nI0720 01:53:28.375756 474 log.go:181] (0xc000c83290) (0xc000937860) Stream added, broadcasting: 5\nI0720 01:53:28.376842 474 log.go:181] (0xc000c83290) Reply frame received for 5\nI0720 01:53:28.436419 474 log.go:181] (0xc000c83290) Data frame received for 5\nI0720 01:53:28.436456 474 log.go:181] (0xc000937860) (5) Data frame handling\nI0720 01:53:28.436471 474 log.go:181] (0xc000937860) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.98.249:80/\nI0720 01:53:28.436498 474 log.go:181] (0xc000c83290) Data frame received for 3\nI0720 01:53:28.436525 474 log.go:181] (0xc0009377c0) (3) Data frame handling\nI0720 01:53:28.436547 474 log.go:181] (0xc0009377c0) (3) Data frame sent\nI0720 01:53:28.441481 474 log.go:181] (0xc000c83290) Data frame received for 3\nI0720 01:53:28.441496 474 log.go:181] (0xc0009377c0) (3) Data frame handling\nI0720 01:53:28.441505 474 log.go:181] (0xc0009377c0) (3) Data frame sent\nI0720 01:53:28.442058 474 log.go:181] (0xc000c83290) Data frame received for 3\nI0720 01:53:28.442070 474 log.go:181] (0xc0009377c0) (3) Data frame handling\nI0720 01:53:28.442090 474 log.go:181] (0xc0009377c0) (3) Data frame sent\nI0720 01:53:28.442102 474 log.go:181] (0xc000c83290) Data frame received for 5\nI0720 01:53:28.442112 474 log.go:181] (0xc000937860) (5) Data frame handling\nI0720 01:53:28.442125 474 log.go:181] (0xc000937860) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.98.249:80/\nI0720 01:53:28.449621 474 log.go:181] (0xc000c83290) Data frame received for 3\nI0720 01:53:28.449636 474 log.go:181] (0xc0009377c0) (3) Data frame handling\nI0720 01:53:28.449644 474 log.go:181] (0xc0009377c0) (3) Data frame sent\nI0720 01:53:28.450233 474 log.go:181] (0xc000c83290) Data frame received for 3\nI0720 01:53:28.450246 474 log.go:181] (0xc0009377c0) (3) Data frame handling\nI0720 01:53:28.450253 474 log.go:181] (0xc0009377c0) (3) Data frame sent\nI0720 01:53:28.450282 474 log.go:181] (0xc000c83290) Data frame received for 5\nI0720 01:53:28.450304 474 log.go:181] (0xc000937860) (5) Data frame handling\nI0720 01:53:28.450327 474 log.go:181] (0xc000937860) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.98.249:80/\nI0720 01:53:28.454468 474 log.go:181] (0xc000c83290) Data frame received for 3\nI0720 01:53:28.454489 474 log.go:181] (0xc0009377c0) (3) Data frame handling\nI0720 
01:53:28.454498 474 log.go:181] (0xc0009377c0) (3) Data frame sent\nI0720 01:53:28.454999 474 log.go:181] (0xc000c83290) Data frame received for 3\nI0720 01:53:28.455020 474 log.go:181] (0xc0009377c0) (3) Data frame handling\nI0720 01:53:28.455039 474 log.go:181] (0xc000c83290) Data frame received for 5\nI0720 01:53:28.455085 474 log.go:181] (0xc000937860) (5) Data frame handling\nI0720 01:53:28.455107 474 log.go:181] (0xc000937860) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.98.249:80/\nI0720 01:53:28.455125 474 log.go:181] (0xc0009377c0) (3) Data frame sent\nI0720 01:53:28.462004 474 log.go:181] (0xc000c83290) Data frame received for 3\nI0720 01:53:28.462038 474 log.go:181] (0xc0009377c0) (3) Data frame handling\nI0720 01:53:28.462064 474 log.go:181] (0xc0009377c0) (3) Data frame sent\nI0720 01:53:28.462817 474 log.go:181] (0xc000c83290) Data frame received for 5\nI0720 01:53:28.462845 474 log.go:181] (0xc000937860) (5) Data frame handling\nI0720 01:53:28.462860 474 log.go:181] (0xc000937860) (5) Data frame sent\nI0720 01:53:28.462875 474 log.go:181] (0xc000c83290) Data frame received for 5\nI0720 01:53:28.462890 474 log.go:181] (0xc000937860) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.98.249:80/\nI0720 01:53:28.462919 474 log.go:181] (0xc000c83290) Data frame received for 3\nI0720 01:53:28.462958 474 log.go:181] (0xc000937860) (5) Data frame sent\nI0720 01:53:28.463003 474 log.go:181] (0xc0009377c0) (3) Data frame handling\nI0720 01:53:28.463031 474 log.go:181] (0xc0009377c0) (3) Data frame sent\nI0720 01:53:28.467638 474 log.go:181] (0xc000c83290) Data frame received for 3\nI0720 01:53:28.467658 474 log.go:181] (0xc0009377c0) (3) Data frame handling\nI0720 01:53:28.467684 474 log.go:181] (0xc0009377c0) (3) Data frame sent\nI0720 01:53:28.468464 474 log.go:181] (0xc000c83290) Data frame received for 5\nI0720 01:53:28.468478 474 log.go:181] (0xc000937860) (5) Data frame handling\nI0720 01:53:28.468487 474 log.go:181] (0xc000937860) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.98.249:80/\nI0720 01:53:28.468651 474 log.go:181] (0xc000c83290) Data frame received for 3\nI0720 01:53:28.468670 474 log.go:181] (0xc0009377c0) (3) Data frame handling\nI0720 01:53:28.468683 474 log.go:181] (0xc0009377c0) (3) Data frame sent\nI0720 01:53:28.474091 474 log.go:181] (0xc000c83290) Data frame received for 3\nI0720 01:53:28.474111 474 log.go:181] (0xc0009377c0) (3) Data frame handling\nI0720 01:53:28.474126 474 log.go:181] (0xc0009377c0) (3) Data frame sent\nI0720 01:53:28.474567 474 log.go:181] (0xc000c83290) Data frame received for 5\nI0720 01:53:28.474582 474 log.go:181] (0xc000937860) (5) Data frame handling\nI0720 01:53:28.474590 474 log.go:181] (0xc000937860) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.98.249:80/\nI0720 01:53:28.474598 474 log.go:181] (0xc000c83290) Data frame received for 3\nI0720 01:53:28.474638 474 log.go:181] (0xc0009377c0) (3) Data frame handling\nI0720 01:53:28.474663 474 log.go:181] (0xc0009377c0) (3) Data frame sent\nI0720 01:53:28.479953 474 log.go:181] (0xc000c83290) Data frame received for 3\nI0720 01:53:28.479972 474 log.go:181] (0xc0009377c0) (3) Data frame handling\nI0720 01:53:28.479985 474 log.go:181] (0xc0009377c0) (3) Data frame sent\nI0720 01:53:28.480410 474 log.go:181] (0xc000c83290) Data frame received for 3\nI0720 01:53:28.480424 474 log.go:181] (0xc0009377c0) (3) Data frame handling\nI0720 01:53:28.480440 474 log.go:181] 
(0xc0009377c0) (3) Data frame sent\nI0720 01:53:28.480457 474 log.go:181] (0xc000c83290) Data frame received for 5\nI0720 01:53:28.480475 474 log.go:181] (0xc000937860) (5) Data frame handling\nI0720 01:53:28.480502 474 log.go:181] (0xc000937860) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.98.249:80/\nI0720 01:53:28.487683 474 log.go:181] (0xc000c83290) Data frame received for 3\nI0720 01:53:28.487706 474 log.go:181] (0xc0009377c0) (3) Data frame handling\nI0720 01:53:28.487737 474 log.go:181] (0xc0009377c0) (3) Data frame sent\nI0720 01:53:28.488183 474 log.go:181] (0xc000c83290) Data frame received for 3\nI0720 01:53:28.488207 474 log.go:181] (0xc000c83290) Data frame received for 5\nI0720 01:53:28.488235 474 log.go:181] (0xc000937860) (5) Data frame handling\nI0720 01:53:28.488246 474 log.go:181] (0xc000937860) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.98.249:80/\nI0720 01:53:28.488263 474 log.go:181] (0xc0009377c0) (3) Data frame handling\nI0720 01:53:28.488273 474 log.go:181] (0xc0009377c0) (3) Data frame sent\nI0720 01:53:28.492326 474 log.go:181] (0xc000c83290) Data frame received for 3\nI0720 01:53:28.492347 474 log.go:181] (0xc0009377c0) (3) Data frame handling\nI0720 01:53:28.492366 474 log.go:181] (0xc0009377c0) (3) Data frame sent\nI0720 01:53:28.493219 474 log.go:181] (0xc000c83290) Data frame received for 5\nI0720 01:53:28.493269 474 log.go:181] (0xc000937860) (5) Data frame handling\nI0720 01:53:28.493284 474 log.go:181] (0xc000937860) (5) Data frame sent\nI0720 01:53:28.493295 474 log.go:181] (0xc000c83290) Data frame received for 5\nI0720 01:53:28.493305 474 log.go:181] (0xc000937860) (5) Data frame handling\nI0720 01:53:28.493322 474 log.go:181] (0xc000c83290) Data frame received for 3\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.98.249:80/\nI0720 01:53:28.493331 474 log.go:181] (0xc0009377c0) (3) Data frame handling\nI0720 01:53:28.493343 474 log.go:181] (0xc000937860) (5) Data frame sent\nI0720 01:53:28.493355 474 log.go:181] (0xc0009377c0) (3) Data frame sent\nI0720 01:53:28.498645 474 log.go:181] (0xc000c83290) Data frame received for 3\nI0720 01:53:28.498663 474 log.go:181] (0xc0009377c0) (3) Data frame handling\nI0720 01:53:28.498671 474 log.go:181] (0xc0009377c0) (3) Data frame sent\nI0720 01:53:28.499084 474 log.go:181] (0xc000c83290) Data frame received for 5\nI0720 01:53:28.499100 474 log.go:181] (0xc000937860) (5) Data frame handling\nI0720 01:53:28.499109 474 log.go:181] (0xc000937860) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.98.249:80/\nI0720 01:53:28.499131 474 log.go:181] (0xc000c83290) Data frame received for 3\nI0720 01:53:28.499149 474 log.go:181] (0xc0009377c0) (3) Data frame handling\nI0720 01:53:28.499157 474 log.go:181] (0xc0009377c0) (3) Data frame sent\nI0720 01:53:28.505528 474 log.go:181] (0xc000c83290) Data frame received for 3\nI0720 01:53:28.505545 474 log.go:181] (0xc0009377c0) (3) Data frame handling\nI0720 01:53:28.505552 474 log.go:181] (0xc0009377c0) (3) Data frame sent\nI0720 01:53:28.506251 474 log.go:181] (0xc000c83290) Data frame received for 3\nI0720 01:53:28.506274 474 log.go:181] (0xc0009377c0) (3) Data frame handling\nI0720 01:53:28.506291 474 log.go:181] (0xc0009377c0) (3) Data frame sent\nI0720 01:53:28.506305 474 log.go:181] (0xc000c83290) Data frame received for 5\nI0720 01:53:28.506316 474 log.go:181] (0xc000937860) (5) Data frame handling\nI0720 01:53:28.506329 474 log.go:181] (0xc000937860) (5) Data frame 
sent\nI0720 01:53:28.506347 474 log.go:181] (0xc000c83290) Data frame received for 5\nI0720 01:53:28.506363 474 log.go:181] (0xc000937860) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.98.249:80/\nI0720 01:53:28.506384 474 log.go:181] (0xc000937860) (5) Data frame sent\nI0720 01:53:28.512243 474 log.go:181] (0xc000c83290) Data frame received for 3\nI0720 01:53:28.512274 474 log.go:181] (0xc0009377c0) (3) Data frame handling\nI0720 01:53:28.512291 474 log.go:181] (0xc0009377c0) (3) Data frame sent\nI0720 01:53:28.512853 474 log.go:181] (0xc000c83290) Data frame received for 3\nI0720 01:53:28.512878 474 log.go:181] (0xc0009377c0) (3) Data frame handling\nI0720 01:53:28.512893 474 log.go:181] (0xc0009377c0) (3) Data frame sent\nI0720 01:53:28.512923 474 log.go:181] (0xc000c83290) Data frame received for 5\nI0720 01:53:28.512950 474 log.go:181] (0xc000937860) (5) Data frame handling\nI0720 01:53:28.512978 474 log.go:181] (0xc000937860) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.98.249:80/\nI0720 01:53:28.518334 474 log.go:181] (0xc000c83290) Data frame received for 3\nI0720 01:53:28.518353 474 log.go:181] (0xc0009377c0) (3) Data frame handling\nI0720 01:53:28.518369 474 log.go:181] (0xc0009377c0) (3) Data frame sent\nI0720 01:53:28.518826 474 log.go:181] (0xc000c83290) Data frame received for 3\nI0720 01:53:28.518857 474 log.go:181] (0xc0009377c0) (3) Data frame handling\nI0720 01:53:28.518878 474 log.go:181] (0xc0009377c0) (3) Data frame sent\nI0720 01:53:28.518906 474 log.go:181] (0xc000c83290) Data frame received for 5\nI0720 01:53:28.518928 474 log.go:181] (0xc000937860) (5) Data frame handling\nI0720 01:53:28.518945 474 log.go:181] (0xc000937860) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.98.249:80/\nI0720 01:53:28.524063 474 log.go:181] (0xc000c83290) Data frame received for 3\nI0720 01:53:28.524110 474 log.go:181] (0xc0009377c0) (3) Data frame handling\nI0720 01:53:28.524133 474 log.go:181] (0xc0009377c0) (3) Data frame sent\nI0720 01:53:28.524934 474 log.go:181] (0xc000c83290) Data frame received for 3\nI0720 01:53:28.524947 474 log.go:181] (0xc0009377c0) (3) Data frame handling\nI0720 01:53:28.524956 474 log.go:181] (0xc0009377c0) (3) Data frame sent\nI0720 01:53:28.524968 474 log.go:181] (0xc000c83290) Data frame received for 5\nI0720 01:53:28.524983 474 log.go:181] (0xc000937860) (5) Data frame handling\nI0720 01:53:28.525003 474 log.go:181] (0xc000937860) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.98.249:80/\nI0720 01:53:28.531114 474 log.go:181] (0xc000c83290) Data frame received for 3\nI0720 01:53:28.531140 474 log.go:181] (0xc0009377c0) (3) Data frame handling\nI0720 01:53:28.531165 474 log.go:181] (0xc0009377c0) (3) Data frame sent\nI0720 01:53:28.531731 474 log.go:181] (0xc000c83290) Data frame received for 3\nI0720 01:53:28.531749 474 log.go:181] (0xc0009377c0) (3) Data frame handling\nI0720 01:53:28.531761 474 log.go:181] (0xc0009377c0) (3) Data frame sent\nI0720 01:53:28.531797 474 log.go:181] (0xc000c83290) Data frame received for 5\nI0720 01:53:28.531818 474 log.go:181] (0xc000937860) (5) Data frame handling\nI0720 01:53:28.531831 474 log.go:181] (0xc000937860) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.98.249:80/\nI0720 01:53:28.537409 474 log.go:181] (0xc000c83290) Data frame received for 3\nI0720 01:53:28.537426 474 log.go:181] (0xc0009377c0) (3) Data frame handling\nI0720 01:53:28.537435 474 log.go:181] 
(0xc0009377c0) (3) Data frame sent\nI0720 01:53:28.538314 474 log.go:181] (0xc000c83290) Data frame received for 3\nI0720 01:53:28.538333 474 log.go:181] (0xc0009377c0) (3) Data frame handling\nI0720 01:53:28.538418 474 log.go:181] (0xc000c83290) Data frame received for 5\nI0720 01:53:28.538442 474 log.go:181] (0xc000937860) (5) Data frame handling\nI0720 01:53:28.540276 474 log.go:181] (0xc000c83290) Data frame received for 1\nI0720 01:53:28.540308 474 log.go:181] (0xc000937720) (1) Data frame handling\nI0720 01:53:28.540336 474 log.go:181] (0xc000937720) (1) Data frame sent\nI0720 01:53:28.540366 474 log.go:181] (0xc000c83290) (0xc000937720) Stream removed, broadcasting: 1\nI0720 01:53:28.540409 474 log.go:181] (0xc000c83290) Go away received\nI0720 01:53:28.540956 474 log.go:181] (0xc000c83290) (0xc000937720) Stream removed, broadcasting: 1\nI0720 01:53:28.540974 474 log.go:181] (0xc000c83290) (0xc0009377c0) Stream removed, broadcasting: 3\nI0720 01:53:28.540981 474 log.go:181] (0xc000c83290) (0xc000937860) Stream removed, broadcasting: 5\n" Jul 20 01:53:28.546: INFO: stdout: "\naffinity-clusterip-transition-2jt4m\naffinity-clusterip-transition-2jt4m\naffinity-clusterip-transition-2jt4m\naffinity-clusterip-transition-2jt4m\naffinity-clusterip-transition-rn7rf\naffinity-clusterip-transition-2jt4m\naffinity-clusterip-transition-2jt4m\naffinity-clusterip-transition-2jt4m\naffinity-clusterip-transition-rn7rf\naffinity-clusterip-transition-2jt4m\naffinity-clusterip-transition-2jt4m\naffinity-clusterip-transition-7xr7m\naffinity-clusterip-transition-2jt4m\naffinity-clusterip-transition-7xr7m\naffinity-clusterip-transition-rn7rf\naffinity-clusterip-transition-7xr7m" Jul 20 01:53:28.546: INFO: Received response from host: affinity-clusterip-transition-2jt4m Jul 20 01:53:28.546: INFO: Received response from host: affinity-clusterip-transition-2jt4m Jul 20 01:53:28.546: INFO: Received response from host: affinity-clusterip-transition-2jt4m Jul 20 01:53:28.546: INFO: Received response from host: affinity-clusterip-transition-2jt4m Jul 20 01:53:28.546: INFO: Received response from host: affinity-clusterip-transition-rn7rf Jul 20 01:53:28.546: INFO: Received response from host: affinity-clusterip-transition-2jt4m Jul 20 01:53:28.546: INFO: Received response from host: affinity-clusterip-transition-2jt4m Jul 20 01:53:28.546: INFO: Received response from host: affinity-clusterip-transition-2jt4m Jul 20 01:53:28.546: INFO: Received response from host: affinity-clusterip-transition-rn7rf Jul 20 01:53:28.546: INFO: Received response from host: affinity-clusterip-transition-2jt4m Jul 20 01:53:28.546: INFO: Received response from host: affinity-clusterip-transition-2jt4m Jul 20 01:53:28.546: INFO: Received response from host: affinity-clusterip-transition-7xr7m Jul 20 01:53:28.546: INFO: Received response from host: affinity-clusterip-transition-2jt4m Jul 20 01:53:28.546: INFO: Received response from host: affinity-clusterip-transition-7xr7m Jul 20 01:53:28.546: INFO: Received response from host: affinity-clusterip-transition-rn7rf Jul 20 01:53:28.546: INFO: Received response from host: affinity-clusterip-transition-7xr7m Jul 20 01:53:28.555: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-5598 execpod-affinityjm4bd -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.111.98.249:80/ ; done' Jul 20 01:53:29.006: INFO: stderr: "I0720 01:53:28.840516 489 log.go:181] (0xc000aaf6b0) 
(0xc000aa6820) Create stream\nI0720 01:53:28.840590 489 log.go:181] (0xc000aaf6b0) (0xc000aa6820) Stream added, broadcasting: 1\nI0720 01:53:28.845354 489 log.go:181] (0xc000aaf6b0) Reply frame received for 1\nI0720 01:53:28.845415 489 log.go:181] (0xc000aaf6b0) (0xc000bf4320) Create stream\nI0720 01:53:28.845429 489 log.go:181] (0xc000aaf6b0) (0xc000bf4320) Stream added, broadcasting: 3\nI0720 01:53:28.846402 489 log.go:181] (0xc000aaf6b0) Reply frame received for 3\nI0720 01:53:28.846444 489 log.go:181] (0xc000aaf6b0) (0xc000892960) Create stream\nI0720 01:53:28.846466 489 log.go:181] (0xc000aaf6b0) (0xc000892960) Stream added, broadcasting: 5\nI0720 01:53:28.847565 489 log.go:181] (0xc000aaf6b0) Reply frame received for 5\nI0720 01:53:28.909962 489 log.go:181] (0xc000aaf6b0) Data frame received for 5\nI0720 01:53:28.910007 489 log.go:181] (0xc000892960) (5) Data frame handling\nI0720 01:53:28.910028 489 log.go:181] (0xc000892960) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.98.249:80/\nI0720 01:53:28.910053 489 log.go:181] (0xc000aaf6b0) Data frame received for 3\nI0720 01:53:28.910066 489 log.go:181] (0xc000bf4320) (3) Data frame handling\nI0720 01:53:28.910079 489 log.go:181] (0xc000bf4320) (3) Data frame sent\nI0720 01:53:28.914123 489 log.go:181] (0xc000aaf6b0) Data frame received for 3\nI0720 01:53:28.914152 489 log.go:181] (0xc000bf4320) (3) Data frame handling\nI0720 01:53:28.914177 489 log.go:181] (0xc000bf4320) (3) Data frame sent\nI0720 01:53:28.914619 489 log.go:181] (0xc000aaf6b0) Data frame received for 3\nI0720 01:53:28.914641 489 log.go:181] (0xc000bf4320) (3) Data frame handling\nI0720 01:53:28.914656 489 log.go:181] (0xc000bf4320) (3) Data frame sent\nI0720 01:53:28.914672 489 log.go:181] (0xc000aaf6b0) Data frame received for 5\nI0720 01:53:28.914686 489 log.go:181] (0xc000892960) (5) Data frame handling\nI0720 01:53:28.914693 489 log.go:181] (0xc000892960) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.98.249:80/\nI0720 01:53:28.920165 489 log.go:181] (0xc000aaf6b0) Data frame received for 3\nI0720 01:53:28.920197 489 log.go:181] (0xc000bf4320) (3) Data frame handling\nI0720 01:53:28.920218 489 log.go:181] (0xc000bf4320) (3) Data frame sent\nI0720 01:53:28.920970 489 log.go:181] (0xc000aaf6b0) Data frame received for 3\nI0720 01:53:28.920999 489 log.go:181] (0xc000bf4320) (3) Data frame handling\nI0720 01:53:28.921011 489 log.go:181] (0xc000bf4320) (3) Data frame sent\nI0720 01:53:28.921029 489 log.go:181] (0xc000aaf6b0) Data frame received for 5\nI0720 01:53:28.921045 489 log.go:181] (0xc000892960) (5) Data frame handling\nI0720 01:53:28.921056 489 log.go:181] (0xc000892960) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.98.249:80/\nI0720 01:53:28.926365 489 log.go:181] (0xc000aaf6b0) Data frame received for 3\nI0720 01:53:28.926384 489 log.go:181] (0xc000bf4320) (3) Data frame handling\nI0720 01:53:28.926393 489 log.go:181] (0xc000bf4320) (3) Data frame sent\nI0720 01:53:28.926963 489 log.go:181] (0xc000aaf6b0) Data frame received for 3\nI0720 01:53:28.927000 489 log.go:181] (0xc000bf4320) (3) Data frame handling\nI0720 01:53:28.927014 489 log.go:181] (0xc000bf4320) (3) Data frame sent\nI0720 01:53:28.927040 489 log.go:181] (0xc000aaf6b0) Data frame received for 5\nI0720 01:53:28.927050 489 log.go:181] (0xc000892960) (5) Data frame handling\nI0720 01:53:28.927061 489 log.go:181] (0xc000892960) (5) Data frame sent\nI0720 01:53:28.927071 489 log.go:181] (0xc000aaf6b0) 
Data frame received for 5\nI0720 01:53:28.927080 489 log.go:181] (0xc000892960) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.98.249:80/\nI0720 01:53:28.927102 489 log.go:181] (0xc000892960) (5) Data frame sent\nI0720 01:53:28.931407 489 log.go:181] (0xc000aaf6b0) Data frame received for 3\nI0720 01:53:28.931437 489 log.go:181] (0xc000bf4320) (3) Data frame handling\nI0720 01:53:28.931454 489 log.go:181] (0xc000bf4320) (3) Data frame sent\nI0720 01:53:28.931951 489 log.go:181] (0xc000aaf6b0) Data frame received for 3\nI0720 01:53:28.931985 489 log.go:181] (0xc000bf4320) (3) Data frame handling\nI0720 01:53:28.932001 489 log.go:181] (0xc000bf4320) (3) Data frame sent\nI0720 01:53:28.932017 489 log.go:181] (0xc000aaf6b0) Data frame received for 5\nI0720 01:53:28.932026 489 log.go:181] (0xc000892960) (5) Data frame handling\nI0720 01:53:28.932034 489 log.go:181] (0xc000892960) (5) Data frame sent\nI0720 01:53:28.932048 489 log.go:181] (0xc000aaf6b0) Data frame received for 5\nI0720 01:53:28.932070 489 log.go:181] (0xc000892960) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.98.249:80/\nI0720 01:53:28.932094 489 log.go:181] (0xc000892960) (5) Data frame sent\nI0720 01:53:28.938043 489 log.go:181] (0xc000aaf6b0) Data frame received for 3\nI0720 01:53:28.938071 489 log.go:181] (0xc000bf4320) (3) Data frame handling\nI0720 01:53:28.938091 489 log.go:181] (0xc000bf4320) (3) Data frame sent\nI0720 01:53:28.938615 489 log.go:181] (0xc000aaf6b0) Data frame received for 5\nI0720 01:53:28.938627 489 log.go:181] (0xc000892960) (5) Data frame handling\nI0720 01:53:28.938632 489 log.go:181] (0xc000892960) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.98.249:80/\nI0720 01:53:28.938646 489 log.go:181] (0xc000aaf6b0) Data frame received for 3\nI0720 01:53:28.938669 489 log.go:181] (0xc000bf4320) (3) Data frame handling\nI0720 01:53:28.938684 489 log.go:181] (0xc000bf4320) (3) Data frame sent\nI0720 01:53:28.942466 489 log.go:181] (0xc000aaf6b0) Data frame received for 3\nI0720 01:53:28.942487 489 log.go:181] (0xc000bf4320) (3) Data frame handling\nI0720 01:53:28.942515 489 log.go:181] (0xc000bf4320) (3) Data frame sent\nI0720 01:53:28.942897 489 log.go:181] (0xc000aaf6b0) Data frame received for 3\nI0720 01:53:28.942925 489 log.go:181] (0xc000aaf6b0) Data frame received for 5\nI0720 01:53:28.942967 489 log.go:181] (0xc000892960) (5) Data frame handling\nI0720 01:53:28.942986 489 log.go:181] (0xc000892960) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.98.249:80/\nI0720 01:53:28.943003 489 log.go:181] (0xc000bf4320) (3) Data frame handling\nI0720 01:53:28.943013 489 log.go:181] (0xc000bf4320) (3) Data frame sent\nI0720 01:53:28.947838 489 log.go:181] (0xc000aaf6b0) Data frame received for 3\nI0720 01:53:28.947858 489 log.go:181] (0xc000bf4320) (3) Data frame handling\nI0720 01:53:28.947876 489 log.go:181] (0xc000bf4320) (3) Data frame sent\nI0720 01:53:28.948291 489 log.go:181] (0xc000aaf6b0) Data frame received for 5\nI0720 01:53:28.948312 489 log.go:181] (0xc000892960) (5) Data frame handling\nI0720 01:53:28.948330 489 log.go:181] (0xc000892960) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.98.249:80/\nI0720 01:53:28.948411 489 log.go:181] (0xc000aaf6b0) Data frame received for 3\nI0720 01:53:28.948434 489 log.go:181] (0xc000bf4320) (3) Data frame handling\nI0720 01:53:28.948452 489 log.go:181] (0xc000bf4320) (3) Data frame sent\nI0720 01:53:28.953818 
489 log.go:181] (0xc000aaf6b0) Data frame received for 3\nI0720 01:53:28.953843 489 log.go:181] (0xc000bf4320) (3) Data frame handling\nI0720 01:53:28.953862 489 log.go:181] (0xc000bf4320) (3) Data frame sent\nI0720 01:53:28.954440 489 log.go:181] (0xc000aaf6b0) Data frame received for 3\nI0720 01:53:28.954452 489 log.go:181] (0xc000bf4320) (3) Data frame handling\nI0720 01:53:28.954459 489 log.go:181] (0xc000bf4320) (3) Data frame sent\nI0720 01:53:28.954467 489 log.go:181] (0xc000aaf6b0) Data frame received for 5\nI0720 01:53:28.954472 489 log.go:181] (0xc000892960) (5) Data frame handling\nI0720 01:53:28.954478 489 log.go:181] (0xc000892960) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.98.249:80/\nI0720 01:53:28.958265 489 log.go:181] (0xc000aaf6b0) Data frame received for 3\nI0720 01:53:28.958279 489 log.go:181] (0xc000bf4320) (3) Data frame handling\nI0720 01:53:28.958288 489 log.go:181] (0xc000bf4320) (3) Data frame sent\nI0720 01:53:28.958779 489 log.go:181] (0xc000aaf6b0) Data frame received for 5\nI0720 01:53:28.958806 489 log.go:181] (0xc000892960) (5) Data frame handling\nI0720 01:53:28.958837 489 log.go:181] (0xc000892960) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.98.249:80/\nI0720 01:53:28.958861 489 log.go:181] (0xc000aaf6b0) Data frame received for 3\nI0720 01:53:28.958882 489 log.go:181] (0xc000bf4320) (3) Data frame handling\nI0720 01:53:28.958904 489 log.go:181] (0xc000bf4320) (3) Data frame sent\nI0720 01:53:28.966423 489 log.go:181] (0xc000aaf6b0) Data frame received for 3\nI0720 01:53:28.966450 489 log.go:181] (0xc000bf4320) (3) Data frame handling\nI0720 01:53:28.966480 489 log.go:181] (0xc000bf4320) (3) Data frame sent\nI0720 01:53:28.967041 489 log.go:181] (0xc000aaf6b0) Data frame received for 3\nI0720 01:53:28.967071 489 log.go:181] (0xc000bf4320) (3) Data frame handling\nI0720 01:53:28.967088 489 log.go:181] (0xc000bf4320) (3) Data frame sent\nI0720 01:53:28.967109 489 log.go:181] (0xc000aaf6b0) Data frame received for 5\nI0720 01:53:28.967119 489 log.go:181] (0xc000892960) (5) Data frame handling\nI0720 01:53:28.967143 489 log.go:181] (0xc000892960) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.98.249:80/\nI0720 01:53:28.972218 489 log.go:181] (0xc000aaf6b0) Data frame received for 3\nI0720 01:53:28.972242 489 log.go:181] (0xc000bf4320) (3) Data frame handling\nI0720 01:53:28.972252 489 log.go:181] (0xc000bf4320) (3) Data frame sent\nI0720 01:53:28.972957 489 log.go:181] (0xc000aaf6b0) Data frame received for 3\nI0720 01:53:28.973007 489 log.go:181] (0xc000bf4320) (3) Data frame handling\nI0720 01:53:28.973024 489 log.go:181] (0xc000bf4320) (3) Data frame sent\nI0720 01:53:28.973040 489 log.go:181] (0xc000aaf6b0) Data frame received for 5\nI0720 01:53:28.973048 489 log.go:181] (0xc000892960) (5) Data frame handling\nI0720 01:53:28.973057 489 log.go:181] (0xc000892960) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.98.249:80/\nI0720 01:53:28.977149 489 log.go:181] (0xc000aaf6b0) Data frame received for 3\nI0720 01:53:28.977160 489 log.go:181] (0xc000bf4320) (3) Data frame handling\nI0720 01:53:28.977165 489 log.go:181] (0xc000bf4320) (3) Data frame sent\nI0720 01:53:28.977543 489 log.go:181] (0xc000aaf6b0) Data frame received for 3\nI0720 01:53:28.977561 489 log.go:181] (0xc000bf4320) (3) Data frame handling\nI0720 01:53:28.977570 489 log.go:181] (0xc000bf4320) (3) Data frame sent\nI0720 01:53:28.977582 489 log.go:181] (0xc000aaf6b0) Data 
frame received for 5\nI0720 01:53:28.977590 489 log.go:181] (0xc000892960) (5) Data frame handling\nI0720 01:53:28.977598 489 log.go:181] (0xc000892960) (5) Data frame sent\nI0720 01:53:28.977605 489 log.go:181] (0xc000aaf6b0) Data frame received for 5\nI0720 01:53:28.977612 489 log.go:181] (0xc000892960) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.98.249:80/\nI0720 01:53:28.977662 489 log.go:181] (0xc000892960) (5) Data frame sent\nI0720 01:53:28.981676 489 log.go:181] (0xc000aaf6b0) Data frame received for 3\nI0720 01:53:28.981699 489 log.go:181] (0xc000bf4320) (3) Data frame handling\nI0720 01:53:28.981716 489 log.go:181] (0xc000bf4320) (3) Data frame sent\nI0720 01:53:28.982053 489 log.go:181] (0xc000aaf6b0) Data frame received for 5\nI0720 01:53:28.982070 489 log.go:181] (0xc000892960) (5) Data frame handling\nI0720 01:53:28.982086 489 log.go:181] (0xc000892960) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.98.249:80/\nI0720 01:53:28.982122 489 log.go:181] (0xc000aaf6b0) Data frame received for 3\nI0720 01:53:28.982136 489 log.go:181] (0xc000bf4320) (3) Data frame handling\nI0720 01:53:28.982143 489 log.go:181] (0xc000bf4320) (3) Data frame sent\nI0720 01:53:28.987042 489 log.go:181] (0xc000aaf6b0) Data frame received for 3\nI0720 01:53:28.987065 489 log.go:181] (0xc000bf4320) (3) Data frame handling\nI0720 01:53:28.987081 489 log.go:181] (0xc000bf4320) (3) Data frame sent\nI0720 01:53:28.987830 489 log.go:181] (0xc000aaf6b0) Data frame received for 5\nI0720 01:53:28.987868 489 log.go:181] (0xc000892960) (5) Data frame handling\nI0720 01:53:28.987881 489 log.go:181] (0xc000892960) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.98.249:80/\nI0720 01:53:28.987895 489 log.go:181] (0xc000aaf6b0) Data frame received for 3\nI0720 01:53:28.987907 489 log.go:181] (0xc000bf4320) (3) Data frame handling\nI0720 01:53:28.987915 489 log.go:181] (0xc000bf4320) (3) Data frame sent\nI0720 01:53:28.991254 489 log.go:181] (0xc000aaf6b0) Data frame received for 3\nI0720 01:53:28.991279 489 log.go:181] (0xc000bf4320) (3) Data frame handling\nI0720 01:53:28.991297 489 log.go:181] (0xc000bf4320) (3) Data frame sent\nI0720 01:53:28.991652 489 log.go:181] (0xc000aaf6b0) Data frame received for 3\nI0720 01:53:28.991678 489 log.go:181] (0xc000bf4320) (3) Data frame handling\nI0720 01:53:28.991703 489 log.go:181] (0xc000bf4320) (3) Data frame sent\nI0720 01:53:28.991723 489 log.go:181] (0xc000aaf6b0) Data frame received for 5\nI0720 01:53:28.991741 489 log.go:181] (0xc000892960) (5) Data frame handling\nI0720 01:53:28.991765 489 log.go:181] (0xc000892960) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.98.249:80/\nI0720 01:53:28.997463 489 log.go:181] (0xc000aaf6b0) Data frame received for 3\nI0720 01:53:28.997494 489 log.go:181] (0xc000bf4320) (3) Data frame handling\nI0720 01:53:28.997528 489 log.go:181] (0xc000bf4320) (3) Data frame sent\nI0720 01:53:28.998142 489 log.go:181] (0xc000aaf6b0) Data frame received for 3\nI0720 01:53:28.998170 489 log.go:181] (0xc000bf4320) (3) Data frame handling\nI0720 01:53:28.998284 489 log.go:181] (0xc000aaf6b0) Data frame received for 5\nI0720 01:53:28.998313 489 log.go:181] (0xc000892960) (5) Data frame handling\nI0720 01:53:29.000496 489 log.go:181] (0xc000aaf6b0) Data frame received for 1\nI0720 01:53:29.000519 489 log.go:181] (0xc000aa6820) (1) Data frame handling\nI0720 01:53:29.000533 489 log.go:181] (0xc000aa6820) (1) Data frame sent\nI0720 
01:53:29.000551 489 log.go:181] (0xc000aaf6b0) (0xc000aa6820) Stream removed, broadcasting: 1\nI0720 01:53:29.000887 489 log.go:181] (0xc000aaf6b0) Go away received\nI0720 01:53:29.000981 489 log.go:181] (0xc000aaf6b0) (0xc000aa6820) Stream removed, broadcasting: 1\nI0720 01:53:29.001000 489 log.go:181] (0xc000aaf6b0) (0xc000bf4320) Stream removed, broadcasting: 3\nI0720 01:53:29.001008 489 log.go:181] (0xc000aaf6b0) (0xc000892960) Stream removed, broadcasting: 5\n" Jul 20 01:53:29.006: INFO: stdout: "\naffinity-clusterip-transition-2jt4m\naffinity-clusterip-transition-2jt4m\naffinity-clusterip-transition-2jt4m\naffinity-clusterip-transition-2jt4m\naffinity-clusterip-transition-2jt4m\naffinity-clusterip-transition-2jt4m\naffinity-clusterip-transition-2jt4m\naffinity-clusterip-transition-2jt4m\naffinity-clusterip-transition-2jt4m\naffinity-clusterip-transition-2jt4m\naffinity-clusterip-transition-2jt4m\naffinity-clusterip-transition-2jt4m\naffinity-clusterip-transition-2jt4m\naffinity-clusterip-transition-2jt4m\naffinity-clusterip-transition-2jt4m\naffinity-clusterip-transition-2jt4m" Jul 20 01:53:29.007: INFO: Received response from host: affinity-clusterip-transition-2jt4m Jul 20 01:53:29.007: INFO: Received response from host: affinity-clusterip-transition-2jt4m Jul 20 01:53:29.007: INFO: Received response from host: affinity-clusterip-transition-2jt4m Jul 20 01:53:29.007: INFO: Received response from host: affinity-clusterip-transition-2jt4m Jul 20 01:53:29.007: INFO: Received response from host: affinity-clusterip-transition-2jt4m Jul 20 01:53:29.007: INFO: Received response from host: affinity-clusterip-transition-2jt4m Jul 20 01:53:29.007: INFO: Received response from host: affinity-clusterip-transition-2jt4m Jul 20 01:53:29.007: INFO: Received response from host: affinity-clusterip-transition-2jt4m Jul 20 01:53:29.007: INFO: Received response from host: affinity-clusterip-transition-2jt4m Jul 20 01:53:29.007: INFO: Received response from host: affinity-clusterip-transition-2jt4m Jul 20 01:53:29.007: INFO: Received response from host: affinity-clusterip-transition-2jt4m Jul 20 01:53:29.007: INFO: Received response from host: affinity-clusterip-transition-2jt4m Jul 20 01:53:29.007: INFO: Received response from host: affinity-clusterip-transition-2jt4m Jul 20 01:53:29.007: INFO: Received response from host: affinity-clusterip-transition-2jt4m Jul 20 01:53:29.007: INFO: Received response from host: affinity-clusterip-transition-2jt4m Jul 20 01:53:29.007: INFO: Received response from host: affinity-clusterip-transition-2jt4m Jul 20 01:53:29.007: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-5598, will wait for the garbage collector to delete the pods Jul 20 01:53:29.357: INFO: Deleting ReplicationController affinity-clusterip-transition took: 64.856907ms Jul 20 01:53:29.758: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 400.218389ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 01:53:44.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5598" for this suite. 
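The two stdout blocks above are the whole point of the affinity test: while the Service's sessionAffinity is off, the sixteen curls spread across the three backends (-2jt4m, -rn7rf, -7xr7m); after the switch, every one of the sixteen requests lands on the single backend -2jt4m. A minimal Go sketch of the Service object being toggled, built from the same k8s.io/api types the e2e framework itself uses (the helper name and port numbers are illustrative assumptions, not taken from the test source):

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// affinityService builds a ClusterIP Service selecting the test pods.
// With SessionAffinity=ClientIP, kube-proxy pins each client IP to one
// endpoint; with None, requests spread across all ready endpoints.
func affinityService(name, ns string, affinity corev1.ServiceAffinity) *corev1.Service {
	return &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: ns},
		Spec: corev1.ServiceSpec{
			Selector:        map[string]string{"name": name},
			SessionAffinity: affinity, // corev1.ServiceAffinityClientIP or corev1.ServiceAffinityNone
			Ports: []corev1.ServicePort{{
				Port:       80,                  // the port curled above
				TargetPort: intstr.FromInt(9376), // backend port is an assumption
			}},
		},
	}
}

func main() {
	_ = affinityService("affinity-clusterip-transition", "services-5598", corev1.ServiceAffinityClientIP)
}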
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:735 • [SLOW TEST:30.851 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":294,"completed":40,"skipped":836,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 01:53:44.044: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-26557709-66fb-44ff-9f1d-199c6537bcf3 in namespace container-probe-149 Jul 20 01:53:48.196: INFO: Started pod liveness-26557709-66fb-44ff-9f1d-199c6537bcf3 in namespace container-probe-149 STEP: checking the pod's current state and verifying that restartCount is present Jul 20 01:53:48.199: INFO: Initial restart count of pod liveness-26557709-66fb-44ff-9f1d-199c6537bcf3 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 01:57:49.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-149" for this suite. 
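This probe test passes by the absence of events: the pod serves TCP on 8080, the kubelet's tcp:8080 liveness probe keeps succeeding, and after roughly four minutes of watching, restartCount is still 0. A sketch of the pod shape that produces this behavior, using the v1.19-era API types (corev1.Probe still embeds Handler in this release; later releases renamed it ProbeHandler; the image, args, and timings below are assumptions):

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// livenessPod sketches a pod whose container listens on 8080, so the
// tcp:8080 probe keeps succeeding and the kubelet never restarts it.
func livenessPod(name, ns string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: ns},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "liveness",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.20", // image is an assumption
				Args:  []string{"serve-hostname", "--port", "8080"},
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{ // v1.19-era embedded field
						TCPSocket: &corev1.TCPSocketAction{Port: intstr.FromInt(8080)},
					},
					InitialDelaySeconds: 15, // representative timings
					FailureThreshold:    3,
				},
			}},
		},
	}
}

func main() { _ = livenessPod("liveness-tcp-demo", "container-probe-149") }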
• [SLOW TEST:245.852 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":294,"completed":41,"skipped":890,"failed":0} SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 01:57:49.896: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:255 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: validating api versions Jul 20 01:57:50.255: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config api-versions' Jul 20 01:57:50.430: INFO: stderr: "" Jul 20 01:57:50.430: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 01:57:50.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6352" for this suite. 
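kubectl api-versions is a thin front end over the discovery endpoints, so the assertion above (the bare "v1" appearing in stdout) can be reproduced with client-go's discovery client; the legacy core group advertises itself as the group-version "v1". A sketch, reusing the kubeconfig path from the log:

package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a rest.Config from the same kubeconfig the suite uses.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(config)
	if err != nil {
		panic(err)
	}
	groups, err := dc.ServerGroups()
	if err != nil {
		panic(err)
	}
	found := false
	for _, g := range groups.Groups {
		for _, v := range g.Versions {
			if v.GroupVersion == "v1" { // the core/legacy group reports bare "v1"
				found = true
			}
		}
	}
	fmt.Println("v1 available:", found)
}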
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":294,"completed":42,"skipped":900,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 01:57:50.440: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0720 01:58:00.725266 8 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Jul 20 01:59:02.744: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 01:59:02.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8188" for this suite. • [SLOW TEST:72.312 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":294,"completed":43,"skipped":913,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 01:59:02.753: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:255 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating Agnhost RC Jul 20 01:59:03.126: INFO: namespace kubectl-4747 Jul 20 01:59:03.126: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4747' Jul 20 01:59:09.497: INFO: stderr: "" Jul 20 01:59:09.497: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. 
Jul 20 01:59:10.501: INFO: Selector matched 1 pods for map[app:agnhost] Jul 20 01:59:10.501: INFO: Found 0 / 1 Jul 20 01:59:11.516: INFO: Selector matched 1 pods for map[app:agnhost] Jul 20 01:59:11.517: INFO: Found 0 / 1 Jul 20 01:59:12.502: INFO: Selector matched 1 pods for map[app:agnhost] Jul 20 01:59:12.502: INFO: Found 0 / 1 Jul 20 01:59:13.502: INFO: Selector matched 1 pods for map[app:agnhost] Jul 20 01:59:13.502: INFO: Found 1 / 1 Jul 20 01:59:13.502: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jul 20 01:59:13.506: INFO: Selector matched 1 pods for map[app:agnhost] Jul 20 01:59:13.506: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jul 20 01:59:13.506: INFO: wait on agnhost-primary startup in kubectl-4747 Jul 20 01:59:13.506: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config logs agnhost-primary-rlzh7 agnhost-primary --namespace=kubectl-4747' Jul 20 01:59:13.644: INFO: stderr: "" Jul 20 01:59:13.644: INFO: stdout: "Paused\n" STEP: exposing RC Jul 20 01:59:13.645: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-4747' Jul 20 01:59:13.806: INFO: stderr: "" Jul 20 01:59:13.806: INFO: stdout: "service/rm2 exposed\n" Jul 20 01:59:13.821: INFO: Service rm2 in namespace kubectl-4747 found. STEP: exposing service Jul 20 01:59:15.826: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-4747' Jul 20 01:59:15.982: INFO: stderr: "" Jul 20 01:59:15.982: INFO: stdout: "service/rm3 exposed\n" Jul 20 01:59:16.014: INFO: Service rm3 in namespace kubectl-4747 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 01:59:18.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4747" for this suite. 
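Both expose steps just synthesize Services: --port becomes the Service port and --target-port the backend container port, with the selector copied from the exposed RC (for rm2) or from the existing Service (for rm3). Roughly, the rm2 object created above amounts to the following; the selector labels are assumed from the app:agnhost pods seen earlier in the log rather than read from the test source:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// rm2Service approximates what `kubectl expose rc agnhost-primary --name=rm2
// --port=1234 --target-port=6379` builds: Service port 1234 forwarding to
// container port 6379 on the pods matched by the RC's labels.
func rm2Service() *corev1.Service {
	return &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "rm2", Namespace: "kubectl-4747"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"app": "agnhost"}, // assumed from the RC's pod labels
			Ports: []corev1.ServicePort{{
				Port:       1234,
				TargetPort: intstr.FromInt(6379),
			}},
		},
	}
}

func main() { _ = rm2Service() }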
• [SLOW TEST:15.276 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1241 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":294,"completed":44,"skipped":944,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 01:59:18.029: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Jul 20 01:59:18.114: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jul 20 01:59:18.125: INFO: Waiting for terminating namespaces to be deleted... Jul 20 01:59:18.128: INFO: Logging pods the apiserver thinks are on node latest-worker before test Jul 20 01:59:18.134: INFO: coredns-f9fd979d6-s745j from kube-system started at 2020-07-19 21:39:25 +0000 UTC (1 container statuses recorded) Jul 20 01:59:18.134: INFO: Container coredns ready: true, restart count 0 Jul 20 01:59:18.134: INFO: coredns-f9fd979d6-zs4sj from kube-system started at 2020-07-19 21:39:36 +0000 UTC (1 container statuses recorded) Jul 20 01:59:18.134: INFO: Container coredns ready: true, restart count 0 Jul 20 01:59:18.134: INFO: kindnet-46dnt from kube-system started at 2020-07-19 21:38:46 +0000 UTC (1 container statuses recorded) Jul 20 01:59:18.134: INFO: Container kindnet-cni ready: true, restart count 0 Jul 20 01:59:18.134: INFO: kube-proxy-sxpg9 from kube-system started at 2020-07-19 21:38:45 +0000 UTC (1 container statuses recorded) Jul 20 01:59:18.134: INFO: Container kube-proxy ready: true, restart count 0 Jul 20 01:59:18.134: INFO: local-path-provisioner-8b46957d4-2gzpd from local-path-storage started at 2020-07-19 21:39:25 +0000 UTC (1 container statuses recorded) Jul 20 01:59:18.134: INFO: Container local-path-provisioner ready: true, restart count 0 Jul 20 01:59:18.134: INFO: Logging pods the apiserver thinks are on node latest-worker2 before test Jul 20 01:59:18.139: INFO: rally-e861bbb2-3esf71v2-v76gf from c-rally-e861bbb2-u20yoz0n started at 2020-07-20 01:58:57 +0000 UTC (1 container statuses recorded) Jul 20 01:59:18.139: INFO: Container rally-e861bbb2-3esf71v2 ready: false, restart count 0 Jul 20 01:59:18.139: INFO: kindnet-g6zbt from kube-system started at 2020-07-19 21:38:46 +0000 UTC (1 container statuses recorded) Jul 20 01:59:18.139: INFO: Container kindnet-cni ready: true, restart count 0 Jul 20 01:59:18.139: INFO: kube-proxy-nsnzn from kube-system started at 2020-07-19 21:38:45 +0000 UTC (1 container statuses recorded) Jul 20 01:59:18.139: INFO: Container kube-proxy ready: true,
restart count 0 Jul 20 01:59:18.139: INFO: agnhost-primary-rlzh7 from kubectl-4747 started at 2020-07-20 01:59:09 +0000 UTC (1 container statuses recorded) Jul 20 01:59:18.139: INFO: Container agnhost-primary ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-57a4d7e5-474e-49b5-8dad-0cdab7893cd8 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-57a4d7e5-474e-49b5-8dad-0cdab7893cd8 off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-57a4d7e5-474e-49b5-8dad-0cdab7893cd8 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 01:59:38.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7444" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:20.706 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":294,"completed":45,"skipped":958,"failed":0} [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 01:59:38.735: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating server pod server in namespace prestop-1864 STEP: Waiting for pods to come up. 
STEP: Creating tester pod tester in namespace prestop-1864 STEP: Deleting pre-stop pod Jul 20 01:59:54.053: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 01:59:54.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-1864" for this suite. • [SLOW TEST:15.389 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":294,"completed":46,"skipped":958,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 01:59:54.124: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0720 02:00:06.857864 8 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Jul 20 02:01:08.878: INFO: MetricsGrabber failed to grab metrics. Skipping metrics gathering. Jul 20 02:01:08.878: INFO: Deleting pod "simpletest-rc-to-be-deleted-29knk" in namespace "gc-7201" Jul 20 02:01:08.933: INFO: Deleting pod "simpletest-rc-to-be-deleted-2bh96" in namespace "gc-7201" Jul 20 02:01:08.975: INFO: Deleting pod "simpletest-rc-to-be-deleted-2pnxq" in namespace "gc-7201" Jul 20 02:01:09.023: INFO: Deleting pod "simpletest-rc-to-be-deleted-4tnn6" in namespace "gc-7201" Jul 20 02:01:09.402: INFO: Deleting pod "simpletest-rc-to-be-deleted-9qbmr" in namespace "gc-7201" [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:01:09.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7201" for this suite.
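The dual-owner setup is the crux of this GC test: half of the pods carry ownerReferences to both ReplicationControllers, so when simpletest-rc-to-be-deleted goes away the garbage collector still sees a live owner and must leave those pods alone, which is why the test then deletes the survivors by hand (the "Deleting pod ..." tail above). A sketch of how such a second owner reference is attached; the UID below is a placeholder, and real code would copy it from the live RC object:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// addSecondOwner appends a non-controller owner reference pointing at the
// RC that stays. A pod may have at most one Controller=true owner, so the
// extra owner is attached with Controller=false.
func addSecondOwner(pod *corev1.Pod) {
	controller := false
	pod.OwnerReferences = append(pod.OwnerReferences, metav1.OwnerReference{
		APIVersion: "v1",
		Kind:       "ReplicationController",
		Name:       "simpletest-rc-to-stay",
		UID:        "00000000-0000-0000-0000-000000000000", // placeholder UID
		Controller: &controller,
	})
}

func main() {
	p := &corev1.Pod{ObjectMeta: metav1.ObjectMeta{Name: "simpletest-rc-to-be-deleted-29knk"}}
	addSecondOwner(p)
}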
• [SLOW TEST:75.752 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":294,"completed":47,"skipped":966,"failed":0} SSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:01:09.877: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Jul 20 02:01:20.192: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jul 20 02:01:20.213: INFO: Pod pod-with-prestop-http-hook still exists Jul 20 02:01:22.213: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jul 20 02:01:22.218: INFO: Pod pod-with-prestop-http-hook still exists Jul 20 02:01:24.213: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jul 20 02:01:24.217: INFO: Pod pod-with-prestop-http-hook still exists Jul 20 02:01:26.213: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jul 20 02:01:26.218: INFO: Pod pod-with-prestop-http-hook still exists Jul 20 02:01:28.213: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jul 20 02:01:28.218: INFO: Pod pod-with-prestop-http-hook still exists Jul 20 02:01:30.213: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jul 20 02:01:30.218: INFO: Pod pod-with-prestop-http-hook still exists Jul 20 02:01:32.213: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jul 20 02:01:32.218: INFO: Pod pod-with-prestop-http-hook still exists Jul 20 02:01:34.213: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jul 20 02:01:34.217: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:01:34.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-8125" for this suite. 
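The long "still exists" tail above is expected behavior: deleting a pod with a preStop hook makes the kubelet run the hook (here an HTTP GET against the handler pod created in BeforeEach) before the container is terminated, so the pod lingers through several poll intervals. A sketch of the pod shape under test, using the v1.19-era corev1.Handler type for hooks (later releases call it LifecycleHandler); the host, port, path, and image are stand-ins:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// podWithPreStopHook builds a pod whose deletion triggers an HTTP GET to the
// hook-handler pod before SIGTERM is delivered.
func podWithPreStopHook(handlerIP string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-http-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pod-with-prestop-http-hook",
				Image: "k8s.gcr.io/pause:3.2", // placeholder image
				Lifecycle: &corev1.Lifecycle{
					PreStop: &corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/echo?msg=prestop", // illustrative path
							Host: handlerIP,
							Port: intstr.FromInt(8080),
						},
					},
				},
			}},
		},
	}
}

func main() { _ = podWithPreStopHook("10.0.0.1") }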
• [SLOW TEST:24.364 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":294,"completed":48,"skipped":970,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:01:34.241: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on node default medium Jul 20 02:01:34.541: INFO: Waiting up to 5m0s for pod "pod-6d38398e-7899-4c80-b496-6371aa8b409e" in namespace "emptydir-7775" to be "Succeeded or Failed" Jul 20 02:01:34.562: INFO: Pod "pod-6d38398e-7899-4c80-b496-6371aa8b409e": Phase="Pending", Reason="", readiness=false. Elapsed: 20.468075ms Jul 20 02:01:36.566: INFO: Pod "pod-6d38398e-7899-4c80-b496-6371aa8b409e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024895658s Jul 20 02:01:38.571: INFO: Pod "pod-6d38398e-7899-4c80-b496-6371aa8b409e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029395057s STEP: Saw pod success Jul 20 02:01:38.571: INFO: Pod "pod-6d38398e-7899-4c80-b496-6371aa8b409e" satisfied condition "Succeeded or Failed" Jul 20 02:01:38.575: INFO: Trying to get logs from node latest-worker2 pod pod-6d38398e-7899-4c80-b496-6371aa8b409e container test-container: STEP: delete the pod Jul 20 02:01:38.612: INFO: Waiting for pod pod-6d38398e-7899-4c80-b496-6371aa8b409e to disappear Jul 20 02:01:38.629: INFO: Pod pod-6d38398e-7899-4c80-b496-6371aa8b409e no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:01:38.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7775" for this suite. 
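The (root,0666,default) triple names the test matrix cell: run as root, expect file mode 0666, emptyDir on the default medium (node disk rather than tmpfs memory). A representative pod for that cell; the command is illustrative, not the framework's exact mount-test arguments:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// emptyDirPod mounts an emptyDir at /test-volume, writes a file with mode
// 0666 as root, and exits, letting the pod reach Succeeded as the test expects.
func emptyDirPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-0666"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{}, // empty Medium means the default (disk) medium
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox", // image is an assumption
				Command: []string{"sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
				}},
			}},
		},
	}
}

func main() { _ = emptyDirPod() }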
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":49,"skipped":974,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:01:38.638: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:731 [It] should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching services [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:01:39.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7568" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:735 •{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":294,"completed":50,"skipped":995,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:01:39.045: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod busybox-60956cde-8740-4e79-83c2-7c405c96ad90 in namespace container-probe-8406 Jul 20 02:01:45.112: INFO: Started pod busybox-60956cde-8740-4e79-83c2-7c405c96ad90 in namespace container-probe-8406 STEP: checking the pod's current state and verifying that restartCount is present Jul 20 02:01:45.114: INFO: Initial restart count of pod busybox-60956cde-8740-4e79-83c2-7c405c96ad90 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:05:46.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8406" for this suite. 
• [SLOW TEST:247.566 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":294,"completed":51,"skipped":1029,"failed":0} SSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:05:46.611: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 20 02:05:47.103: INFO: Creating ReplicaSet my-hostname-basic-09ccd9b1-7237-4620-a748-ca0f6d5cfe1b Jul 20 02:05:47.266: INFO: Pod name my-hostname-basic-09ccd9b1-7237-4620-a748-ca0f6d5cfe1b: Found 0 pods out of 1 Jul 20 02:05:52.272: INFO: Pod name my-hostname-basic-09ccd9b1-7237-4620-a748-ca0f6d5cfe1b: Found 1 pods out of 1 Jul 20 02:05:52.272: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-09ccd9b1-7237-4620-a748-ca0f6d5cfe1b" is running Jul 20 02:05:52.275: INFO: Pod "my-hostname-basic-09ccd9b1-7237-4620-a748-ca0f6d5cfe1b-tfzqx" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-20 02:05:47 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-20 02:05:50 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-20 02:05:50 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-20 02:05:47 +0000 UTC Reason: Message:}]) Jul 20 02:05:52.276: INFO: Trying to dial the pod Jul 20 02:05:57.291: INFO: Controller my-hostname-basic-09ccd9b1-7237-4620-a748-ca0f6d5cfe1b: Got expected result from replica 1 [my-hostname-basic-09ccd9b1-7237-4620-a748-ca0f6d5cfe1b-tfzqx]: "my-hostname-basic-09ccd9b1-7237-4620-a748-ca0f6d5cfe1b-tfzqx", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:05:57.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-6298" for this suite. 
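The ReplicaSet test drives one replica of a public serve-hostname style image and then dials the pod, expecting the pod's own name back, which is the "Got expected result from replica 1" line above. The object under test is roughly the following; the image tag is borrowed from the agnhost references elsewhere in this log and the args are assumed:

package main

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// hostnameReplicaSet runs one replica that answers HTTP requests with its
// own pod name, so each replica is individually verifiable.
func hostnameReplicaSet(name string) *appsv1.ReplicaSet {
	replicas := int32(1)
	labels := map[string]string{"name": name} // selector must match the template labels
	return &appsv1.ReplicaSet{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: appsv1.ReplicaSetSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  name,
						Image: "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20",
						Args:  []string{"serve-hostname"},
					}},
				},
			},
		},
	}
}

func main() { _ = hostnameReplicaSet("my-hostname-basic") }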
• [SLOW TEST:10.707 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":294,"completed":52,"skipped":1038,"failed":0} SSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:05:57.319: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod Jul 20 02:06:01.996: INFO: Successfully updated pod "labelsupdated51600e7-8439-4d55-a286-80579065af6b" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:06:06.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7652" for this suite. 
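The labels-update test leans on a downwardAPI volume: metadata.labels is projected into a file, and when the pod's labels are patched the kubelet rewrites the file in place with no container restart, which the test detects by tailing the file. A sketch of that wiring; the names, label values, and polling command are illustrative:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// labelsPod projects the pod's own labels into /etc/podinfo/labels via a
// downwardAPI volume; label updates show up in the file without a restart.
func labelsPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "labelsupdate-demo",
			Labels: map[string]string{"key": "value1"}, // later patched to a new value (hypothetical values)
		},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "labels",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 2; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
}

func main() { _ = labelsPod() }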
• [SLOW TEST:8.732 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":294,"completed":53,"skipped":1045,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:06:06.051: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 20 02:06:06.133: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Jul 20 02:06:06.158: INFO: Pod name sample-pod: Found 0 pods out of 1 Jul 20 02:06:11.162: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jul 20 02:06:11.162: INFO: Creating deployment "test-rolling-update-deployment" Jul 20 02:06:11.167: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Jul 20 02:06:11.172: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Jul 20 02:06:13.211: INFO: Ensuring status for deployment "test-rolling-update-deployment" is as expected Jul 20 02:06:13.214: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730807571, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730807571, loc:(*time.Location)(0x7deddc0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730807571, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730807571, loc:(*time.Location)(0x7deddc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-5887db9c6b\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 20 02:06:15.219: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 Jul 20 02:06:15.229: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment
deployment-7648 /apis/apps/v1/namespaces/deployment-7648/deployments/test-rolling-update-deployment 8c999e80-5723-412d-948b-71627a7448ae 88902 1 2020-07-20 02:06:11 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2020-07-20 02:06:11 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-07-20 02:06:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002a1f6e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-07-20 02:06:11 +0000 UTC,LastTransitionTime:2020-07-20 02:06:11 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-5887db9c6b" has successfully progressed.,LastUpdateTime:2020-07-20 02:06:14 +0000 UTC,LastTransitionTime:2020-07-20 02:06:11 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Jul 20 02:06:15.233: INFO: New ReplicaSet "test-rolling-update-deployment-5887db9c6b" of
Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-5887db9c6b deployment-7648 /apis/apps/v1/namespaces/deployment-7648/replicasets/test-rolling-update-deployment-5887db9c6b 6d5d461a-0620-4d53-9f2b-aea26f7f9417 88891 1 2020-07-20 02:06:11 +0000 UTC map[name:sample-pod pod-template-hash:5887db9c6b] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 8c999e80-5723-412d-948b-71627a7448ae 0xc002db2ba7 0xc002db2ba8}] [] [{kube-controller-manager Update apps/v1 2020-07-20 02:06:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8c999e80-5723-412d-948b-71627a7448ae\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 5887db9c6b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:5887db9c6b] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002db2c38 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jul 20 02:06:15.233: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Jul 20 02:06:15.233: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-7648 /apis/apps/v1/namespaces/deployment-7648/replicasets/test-rolling-update-controller 4894dce9-58bb-47f2-9e95-786e41b80c5c 88901 2 2020-07-20 02:06:06 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 
8c999e80-5723-412d-948b-71627a7448ae 0xc002db2a97 0xc002db2a98}] [] [{e2e.test Update apps/v1 2020-07-20 02:06:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-07-20 02:06:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8c999e80-5723-412d-948b-71627a7448ae\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002db2b38 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jul 20 02:06:15.237: INFO: Pod "test-rolling-update-deployment-5887db9c6b-l9g95" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-5887db9c6b-l9g95 test-rolling-update-deployment-5887db9c6b- deployment-7648 /api/v1/namespaces/deployment-7648/pods/test-rolling-update-deployment-5887db9c6b-l9g95 4f40d0ff-a390-4240-9f8c-9d1aeca32a01 88890 0 2020-07-20 02:06:11 +0000 UTC map[name:sample-pod pod-template-hash:5887db9c6b] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-5887db9c6b 6d5d461a-0620-4d53-9f2b-aea26f7f9417 0xc002db3117 0xc002db3118}] [] [{kube-controller-manager Update v1 2020-07-20 02:06:11 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6d5d461a-0620-4d53-9f2b-aea26f7f9417\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-07-20 02:06:13 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.123\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-g5hjs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-g5hjs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-g5hjs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:06:11 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:06:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:06:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:06:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:10.244.2.123,StartTime:2020-07-20 02:06:11 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-20 02:06:13 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0,ContainerID:containerd://4fa0ac21d8dfa27690f3b6bb6978ce6b1ee550a201dcdbcc814b8707d619ca3c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.123,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:06:15.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7648" for this suite. • [SLOW TEST:9.195 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":294,"completed":54,"skipped":1057,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:06:15.246: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:06:28.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4775" for this suite. • [SLOW TEST:13.310 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":294,"completed":55,"skipped":1068,"failed":0} SSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:06:28.557: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting the auto-created API token STEP: reading a file in the container Jul 20 02:06:33.171: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1835 pod-service-account-117a91c7-d898-4aa4-9824-268449caebb4 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Jul 20 02:06:33.416: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1835 pod-service-account-117a91c7-d898-4aa4-9824-268449caebb4 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Jul 20 02:06:33.629: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1835 pod-service-account-117a91c7-d898-4aa4-9824-268449caebb4 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:06:33.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-1835" for this suite. 
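Note on the check above: the three kubectl exec calls read the full projected bundle that the kubelet mounts for the pod's service account. The same inspection can be reproduced by hand against any running pod; this is a sketch, with the namespace, pod, and container names as placeholders rather than values from this run:

  kubectl exec --namespace=<namespace> <pod-name> -c <container> -- ls /var/run/secrets/kubernetes.io/serviceaccount
  kubectl exec --namespace=<namespace> <pod-name> -c <container> -- cat /var/run/secrets/kubernetes.io/serviceaccount/token

The mounted directory should contain the same three files the test reads: token, ca.crt, and namespace.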
• [SLOW TEST:5.366 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":294,"completed":56,"skipped":1078,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:06:33.924: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:255 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:307 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a replication controller Jul 20 02:06:34.058: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6827' Jul 20 02:06:34.351: INFO: stderr: "" Jul 20 02:06:34.351: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jul 20 02:06:34.352: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6827' Jul 20 02:06:34.502: INFO: stderr: "" Jul 20 02:06:34.502: INFO: stdout: "update-demo-nautilus-5j72l update-demo-nautilus-bqnxc " Jul 20 02:06:34.502: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5j72l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6827' Jul 20 02:06:34.597: INFO: stderr: "" Jul 20 02:06:34.597: INFO: stdout: "" Jul 20 02:06:34.597: INFO: update-demo-nautilus-5j72l is created but not running Jul 20 02:06:39.597: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6827' Jul 20 02:06:39.697: INFO: stderr: "" Jul 20 02:06:39.697: INFO: stdout: "update-demo-nautilus-5j72l update-demo-nautilus-bqnxc " Jul 20 02:06:39.698: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5j72l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6827' Jul 20 02:06:39.795: INFO: stderr: "" Jul 20 02:06:39.795: INFO: stdout: "true" Jul 20 02:06:39.795: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5j72l -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6827' Jul 20 02:06:39.901: INFO: stderr: "" Jul 20 02:06:39.901: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jul 20 02:06:39.901: INFO: validating pod update-demo-nautilus-5j72l Jul 20 02:06:39.905: INFO: got data: { "image": "nautilus.jpg" } Jul 20 02:06:39.905: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jul 20 02:06:39.905: INFO: update-demo-nautilus-5j72l is verified up and running Jul 20 02:06:39.905: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bqnxc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6827' Jul 20 02:06:40.000: INFO: stderr: "" Jul 20 02:06:40.000: INFO: stdout: "true" Jul 20 02:06:40.000: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bqnxc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6827' Jul 20 02:06:40.110: INFO: stderr: "" Jul 20 02:06:40.110: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jul 20 02:06:40.110: INFO: validating pod update-demo-nautilus-bqnxc Jul 20 02:06:40.114: INFO: got data: { "image": "nautilus.jpg" } Jul 20 02:06:40.114: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jul 20 02:06:40.114: INFO: update-demo-nautilus-bqnxc is verified up and running STEP: scaling down the replication controller Jul 20 02:06:40.117: INFO: scanned /root for discovery docs: Jul 20 02:06:40.117: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-6827' Jul 20 02:06:41.252: INFO: stderr: "" Jul 20 02:06:41.252: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Jul 20 02:06:41.252: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6827' Jul 20 02:06:41.497: INFO: stderr: "" Jul 20 02:06:41.497: INFO: stdout: "update-demo-nautilus-5j72l update-demo-nautilus-bqnxc " STEP: Replicas for name=update-demo: expected=1 actual=2 Jul 20 02:06:46.497: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6827' Jul 20 02:06:46.620: INFO: stderr: "" Jul 20 02:06:46.620: INFO: stdout: "update-demo-nautilus-5j72l update-demo-nautilus-bqnxc " STEP: Replicas for name=update-demo: expected=1 actual=2 Jul 20 02:06:51.620: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6827' Jul 20 02:06:51.744: INFO: stderr: "" Jul 20 02:06:51.744: INFO: stdout: "update-demo-nautilus-5j72l update-demo-nautilus-bqnxc " STEP: Replicas for name=update-demo: expected=1 actual=2 Jul 20 02:06:56.744: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6827' Jul 20 02:06:56.847: INFO: stderr: "" Jul 20 02:06:56.848: INFO: stdout: "update-demo-nautilus-bqnxc " Jul 20 02:06:56.848: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bqnxc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6827' Jul 20 02:06:56.949: INFO: stderr: "" Jul 20 02:06:56.949: INFO: stdout: "true" Jul 20 02:06:56.949: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bqnxc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6827' Jul 20 02:06:57.041: INFO: stderr: "" Jul 20 02:06:57.041: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jul 20 02:06:57.041: INFO: validating pod update-demo-nautilus-bqnxc Jul 20 02:06:57.044: INFO: got data: { "image": "nautilus.jpg" } Jul 20 02:06:57.044: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jul 20 02:06:57.045: INFO: update-demo-nautilus-bqnxc is verified up and running STEP: scaling up the replication controller Jul 20 02:06:57.047: INFO: scanned /root for discovery docs: Jul 20 02:06:57.047: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-6827' Jul 20 02:06:58.166: INFO: stderr: "" Jul 20 02:06:58.166: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Jul 20 02:06:58.166: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6827' Jul 20 02:06:58.267: INFO: stderr: "" Jul 20 02:06:58.267: INFO: stdout: "update-demo-nautilus-bqnxc update-demo-nautilus-zq782 " Jul 20 02:06:58.267: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bqnxc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6827' Jul 20 02:06:58.364: INFO: stderr: "" Jul 20 02:06:58.364: INFO: stdout: "true" Jul 20 02:06:58.364: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bqnxc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6827' Jul 20 02:06:58.465: INFO: stderr: "" Jul 20 02:06:58.465: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jul 20 02:06:58.465: INFO: validating pod update-demo-nautilus-bqnxc Jul 20 02:06:58.468: INFO: got data: { "image": "nautilus.jpg" } Jul 20 02:06:58.468: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jul 20 02:06:58.468: INFO: update-demo-nautilus-bqnxc is verified up and running Jul 20 02:06:58.468: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zq782 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6827' Jul 20 02:06:58.654: INFO: stderr: "" Jul 20 02:06:58.654: INFO: stdout: "" Jul 20 02:06:58.654: INFO: update-demo-nautilus-zq782 is created but not running Jul 20 02:07:03.654: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6827' Jul 20 02:07:03.752: INFO: stderr: "" Jul 20 02:07:03.752: INFO: stdout: "update-demo-nautilus-bqnxc update-demo-nautilus-zq782 " Jul 20 02:07:03.752: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bqnxc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6827' Jul 20 02:07:03.847: INFO: stderr: "" Jul 20 02:07:03.847: INFO: stdout: "true" Jul 20 02:07:03.847: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bqnxc -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6827' Jul 20 02:07:03.951: INFO: stderr: "" Jul 20 02:07:03.951: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jul 20 02:07:03.951: INFO: validating pod update-demo-nautilus-bqnxc Jul 20 02:07:03.957: INFO: got data: { "image": "nautilus.jpg" } Jul 20 02:07:03.957: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jul 20 02:07:03.957: INFO: update-demo-nautilus-bqnxc is verified up and running Jul 20 02:07:03.957: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zq782 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6827' Jul 20 02:07:04.052: INFO: stderr: "" Jul 20 02:07:04.052: INFO: stdout: "true" Jul 20 02:07:04.052: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zq782 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6827' Jul 20 02:07:04.140: INFO: stderr: "" Jul 20 02:07:04.140: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jul 20 02:07:04.140: INFO: validating pod update-demo-nautilus-zq782 Jul 20 02:07:04.144: INFO: got data: { "image": "nautilus.jpg" } Jul 20 02:07:04.144: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jul 20 02:07:04.144: INFO: update-demo-nautilus-zq782 is verified up and running STEP: using delete to clean up resources Jul 20 02:07:04.144: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6827' Jul 20 02:07:04.273: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jul 20 02:07:04.273: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jul 20 02:07:04.273: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-6827' Jul 20 02:07:04.379: INFO: stderr: "No resources found in kubectl-6827 namespace.\n" Jul 20 02:07:04.379: INFO: stdout: "" Jul 20 02:07:04.379: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-6827 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jul 20 02:07:04.495: INFO: stderr: "" Jul 20 02:07:04.495: INFO: stdout: "update-demo-nautilus-bqnxc\nupdate-demo-nautilus-zq782\n" Jul 20 02:07:04.996: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-6827' Jul 20 02:07:05.107: INFO: stderr: "No resources found in kubectl-6827 namespace.\n" Jul 20 02:07:05.107: INFO: stdout: "" Jul 20 02:07:05.107: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-6827 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jul 20 02:07:05.225: INFO: stderr: "" Jul 20 02:07:05.225: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:07:05.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6827" for this suite. 
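Note on the cleanup above: --grace-period=0 --force removes the API object immediately, without waiting for confirmation that anything actually stopped, which is why kubectl prints the warning about the resource possibly continuing to run. A hand-run equivalent using the names from this run:

  kubectl delete rc update-demo-nautilus --grace-period=0 --force --namespace=kubectl-6827
  kubectl get pods -l name=update-demo --namespace=kubectl-6827 -o go-template='{{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'

The go-template filters out pods that already carry a deletionTimestamp, so the poll converges to empty output once the dependent pods are garbage-collected, exactly as the two passes above show.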
• [SLOW TEST:31.308 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:305 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":294,"completed":57,"skipped":1130,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:07:05.232: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 20 02:09:05.568: INFO: Deleting pod "var-expansion-46510fa8-a810-4e91-a35a-ef56b100cc22" in namespace "var-expansion-4605" Jul 20 02:09:05.573: INFO: Wait up to 5m0s for pod "var-expansion-46510fa8-a810-4e91-a35a-ef56b100cc22" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:09:07.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4605" for this suite. 
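Note on the failure mode above: a subPathExpr is only expanded on the node, so the API server accepts the pod and it is the kubelet that should refuse to start the container once the substituted subpath turns out to be absolute; that is why the spec spends roughly two minutes observing the stuck pod before deleting it. Below is a minimal sketch of a pod shaped like the one under test; every name here is a placeholder, and the env value is deliberately an absolute path:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: subpath-absolute-demo
  spec:
    restartPolicy: Never
    containers:
    - name: demo
      image: docker.io/library/busybox:1.29
      command: ["sh", "-c", "sleep 600"]
      env:
      - name: POD_NAME
        value: /absolute    # the substituted value is absolute on purpose
      volumeMounts:
      - name: work
        mountPath: /subpath-mount
        subPathExpr: $(POD_NAME)
    volumes:
    - name: work
      emptyDir: {}
  EOF

The container should sit in a config error state rather than reach Running; deleting the pod afterwards mirrors the teardown in the log.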
• [SLOW TEST:122.471 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]","total":294,"completed":58,"skipped":1155,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:09:07.704: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 20 02:09:07.812: INFO: Creating deployment "test-recreate-deployment" Jul 20 02:09:07.816: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Jul 20 02:09:07.853: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Jul 20 02:09:09.862: INFO: Waiting deployment "test-recreate-deployment" to complete Jul 20 02:09:09.865: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730807747, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730807747, loc:(*time.Location)(0x7deddc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730807748, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730807747, loc:(*time.Location)(0x7deddc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-7589bf48bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 20 02:09:11.898: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Jul 20 02:09:11.906: INFO: Updating deployment test-recreate-deployment Jul 20 02:09:11.906: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 Jul 20 02:09:12.768: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-3932 /apis/apps/v1/namespaces/deployment-3932/deployments/test-recreate-deployment 4d5bbebd-9f8b-4c81-a9d9-e88efda4624d 89654 2
2020-07-20 02:09:07 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-07-20 02:09:11 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-07-20 02:09:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0018f4728 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-07-20 02:09:12 +0000 UTC,LastTransitionTime:2020-07-20 02:09:12 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-f79dd4667" is progressing.,LastUpdateTime:2020-07-20 02:09:12 +0000 UTC,LastTransitionTime:2020-07-20 02:09:07 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Jul 20 02:09:12.990: INFO: New ReplicaSet "test-recreate-deployment-f79dd4667" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-f79dd4667 deployment-3932 /apis/apps/v1/namespaces/deployment-3932/replicasets/test-recreate-deployment-f79dd4667 d7863b47-dc6a-4e08-aa0d-20aeb4003b00 89653 1 2020-07-20 02:09:12 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] 
map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 4d5bbebd-9f8b-4c81-a9d9-e88efda4624d 0xc0018f4e20 0xc0018f4e21}] [] [{kube-controller-manager Update apps/v1 2020-07-20 02:09:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4d5bbebd-9f8b-4c81-a9d9-e88efda4624d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: f79dd4667,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0018f4e98 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jul 20 02:09:12.990: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Jul 20 02:09:12.990: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-7589bf48bb deployment-3932 /apis/apps/v1/namespaces/deployment-3932/replicasets/test-recreate-deployment-7589bf48bb 9d446509-f5c5-4ab9-84e2-a05436649437 89643 2 2020-07-20 02:09:07 +0000 UTC map[name:sample-pod-3 pod-template-hash:7589bf48bb] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 4d5bbebd-9f8b-4c81-a9d9-e88efda4624d 0xc0018f4d07 0xc0018f4d08}] [] [{kube-controller-manager Update apps/v1 2020-07-20 02:09:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4d5bbebd-9f8b-4c81-a9d9-e88efda4624d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 7589bf48bb,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:7589bf48bb] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0018f4db8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jul 20 02:09:13.211: INFO: Pod "test-recreate-deployment-f79dd4667-98sdk" is not available: &Pod{ObjectMeta:{test-recreate-deployment-f79dd4667-98sdk test-recreate-deployment-f79dd4667- deployment-3932 /api/v1/namespaces/deployment-3932/pods/test-recreate-deployment-f79dd4667-98sdk b102bf7f-7371-43d8-b1ac-76479ae4adb0 89655 0 2020-07-20 02:09:12 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] map[] [{apps/v1 ReplicaSet test-recreate-deployment-f79dd4667 d7863b47-dc6a-4e08-aa0d-20aeb4003b00 0xc0018f5370 0xc0018f5371}] [] [{kube-controller-manager Update v1 2020-07-20 02:09:12 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d7863b47-dc6a-4e08-aa0d-20aeb4003b00\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-07-20 02:09:12 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ls6lx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ls6lx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ls6lx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:09:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-07-20 02:09:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:09:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:09:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2020-07-20 02:09:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:09:13.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3932" for this suite. • [SLOW TEST:5.548 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":294,"completed":59,"skipped":1198,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:09:13.252: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-2cb904e9-e334-4163-be83-93aa55eccd98 STEP: Creating a pod to test consume configMaps Jul 20 02:09:13.396: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2a560213-f2ba-4bf3-ab5b-61cc3a5e8fb9" in namespace "projected-9160" to be "Succeeded or Failed" Jul 20 02:09:13.426: INFO: Pod "pod-projected-configmaps-2a560213-f2ba-4bf3-ab5b-61cc3a5e8fb9": Phase="Pending", Reason="", readiness=false. Elapsed: 30.228409ms Jul 20 02:09:15.521: INFO: Pod "pod-projected-configmaps-2a560213-f2ba-4bf3-ab5b-61cc3a5e8fb9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.125327496s Jul 20 02:09:17.539: INFO: Pod "pod-projected-configmaps-2a560213-f2ba-4bf3-ab5b-61cc3a5e8fb9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.142813191s STEP: Saw pod success Jul 20 02:09:17.539: INFO: Pod "pod-projected-configmaps-2a560213-f2ba-4bf3-ab5b-61cc3a5e8fb9" satisfied condition "Succeeded or Failed" Jul 20 02:09:17.541: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-2a560213-f2ba-4bf3-ab5b-61cc3a5e8fb9 container projected-configmap-volume-test: STEP: delete the pod Jul 20 02:09:17.614: INFO: Waiting for pod pod-projected-configmaps-2a560213-f2ba-4bf3-ab5b-61cc3a5e8fb9 to disappear Jul 20 02:09:17.632: INFO: Pod pod-projected-configmaps-2a560213-f2ba-4bf3-ab5b-61cc3a5e8fb9 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:09:17.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9160" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":294,"completed":60,"skipped":1219,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:09:17.702: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 20 02:09:17.793: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
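The object driven by this DaemonSet test can be reproduced outside the suite with a short client-go program. The sketch below is not the suite's own code: the namespace, labels, and container name are illustrative, while the two images are the ones this log shows the test flipping between, and the final Update is the "Update daemon pods image." step that appears a few lines further on.

package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from the same kubeconfig path the log reports.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	labels := map[string]string{"daemonset-name": "daemon-set"} // illustrative label key
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			// The strategy under test: pods are replaced in place when the template changes.
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{Type: appsv1.RollingUpdateDaemonSetStrategyType},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "docker.io/library/httpd:2.4.38-alpine", // initial image seen in the log
					}},
				},
			},
		},
	}
	created, err := cs.AppsV1().DaemonSets("default").Create(context.TODO(), ds, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}

	// Flip the image; this is the template change whose rollout the log polls for.
	created.Spec.Template.Spec.Containers[0].Image = "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20"
	if _, err := cs.AppsV1().DaemonSets("default").Update(context.TODO(), created, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}

The per-pod "Wrong image for pod" polling that follows is the suite verifying the RollingUpdate effect: each daemon pod is deleted and recreated with the new template image, node by node.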
Jul 20 02:09:17.852: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 02:09:17.868: INFO: Number of nodes with available pods: 0 Jul 20 02:09:17.868: INFO: Node latest-worker is running more than one daemon pod Jul 20 02:09:19.014: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 02:09:19.018: INFO: Number of nodes with available pods: 0 Jul 20 02:09:19.018: INFO: Node latest-worker is running more than one daemon pod Jul 20 02:09:19.893: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 02:09:19.897: INFO: Number of nodes with available pods: 0 Jul 20 02:09:19.897: INFO: Node latest-worker is running more than one daemon pod Jul 20 02:09:20.873: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 02:09:20.877: INFO: Number of nodes with available pods: 0 Jul 20 02:09:20.877: INFO: Node latest-worker is running more than one daemon pod Jul 20 02:09:21.874: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 02:09:21.878: INFO: Number of nodes with available pods: 1 Jul 20 02:09:21.878: INFO: Node latest-worker2 is running more than one daemon pod Jul 20 02:09:22.875: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 02:09:22.898: INFO: Number of nodes with available pods: 2 Jul 20 02:09:22.898: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Jul 20 02:09:23.009: INFO: Wrong image for pod: daemon-set-4fjpn. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Jul 20 02:09:23.009: INFO: Wrong image for pod: daemon-set-hshsv. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Jul 20 02:09:23.028: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 02:09:24.033: INFO: Wrong image for pod: daemon-set-4fjpn. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Jul 20 02:09:24.033: INFO: Wrong image for pod: daemon-set-hshsv. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Jul 20 02:09:24.037: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 02:09:25.033: INFO: Wrong image for pod: daemon-set-4fjpn. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Jul 20 02:09:25.033: INFO: Wrong image for pod: daemon-set-hshsv. 
Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Jul 20 02:09:25.033: INFO: Pod daemon-set-hshsv is not available Jul 20 02:09:25.036: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 02:09:26.033: INFO: Wrong image for pod: daemon-set-4fjpn. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Jul 20 02:09:26.033: INFO: Pod daemon-set-z48l7 is not available Jul 20 02:09:26.037: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 02:09:27.034: INFO: Wrong image for pod: daemon-set-4fjpn. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Jul 20 02:09:27.034: INFO: Pod daemon-set-z48l7 is not available Jul 20 02:09:27.038: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 02:09:28.033: INFO: Wrong image for pod: daemon-set-4fjpn. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Jul 20 02:09:28.033: INFO: Pod daemon-set-z48l7 is not available Jul 20 02:09:28.037: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 02:09:29.033: INFO: Wrong image for pod: daemon-set-4fjpn. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Jul 20 02:09:29.036: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 02:09:30.033: INFO: Wrong image for pod: daemon-set-4fjpn. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Jul 20 02:09:30.037: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 02:09:31.151: INFO: Wrong image for pod: daemon-set-4fjpn. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Jul 20 02:09:31.151: INFO: Pod daemon-set-4fjpn is not available Jul 20 02:09:31.155: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 02:09:32.034: INFO: Wrong image for pod: daemon-set-4fjpn. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Jul 20 02:09:32.034: INFO: Pod daemon-set-4fjpn is not available Jul 20 02:09:32.038: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 02:09:33.033: INFO: Wrong image for pod: daemon-set-4fjpn. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. 
Jul 20 02:09:33.033: INFO: Pod daemon-set-4fjpn is not available Jul 20 02:09:33.037: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 02:09:34.043: INFO: Pod daemon-set-czrkp is not available Jul 20 02:09:34.047: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. Jul 20 02:09:34.073: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 02:09:34.094: INFO: Number of nodes with available pods: 1 Jul 20 02:09:34.095: INFO: Node latest-worker2 is running more than one daemon pod Jul 20 02:09:35.099: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 02:09:35.103: INFO: Number of nodes with available pods: 1 Jul 20 02:09:35.103: INFO: Node latest-worker2 is running more than one daemon pod Jul 20 02:09:36.344: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 02:09:36.397: INFO: Number of nodes with available pods: 1 Jul 20 02:09:36.397: INFO: Node latest-worker2 is running more than one daemon pod Jul 20 02:09:37.100: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 02:09:37.104: INFO: Number of nodes with available pods: 1 Jul 20 02:09:37.104: INFO: Node latest-worker2 is running more than one daemon pod Jul 20 02:09:38.099: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 02:09:38.103: INFO: Number of nodes with available pods: 2 Jul 20 02:09:38.103: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4119, will wait for the garbage collector to delete the pods Jul 20 02:09:38.174: INFO: Deleting DaemonSet.extensions daemon-set took: 5.993722ms Jul 20 02:09:38.274: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.23197ms Jul 20 02:09:43.904: INFO: Number of nodes with available pods: 0 Jul 20 02:09:43.904: INFO: Number of running nodes: 0, number of available pods: 0 Jul 20 02:09:43.907: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4119/daemonsets","resourceVersion":"89883"},"items":null} Jul 20 02:09:43.910: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4119/pods","resourceVersion":"89883"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:09:43.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
STEP: Destroying namespace "daemonsets-4119" for this suite. • [SLOW TEST:26.224 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":294,"completed":61,"skipped":1236,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:09:43.927: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 20 02:09:44.444: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jul 20 02:09:46.455: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730807784, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730807784, loc:(*time.Location)(0x7deddc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730807784, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730807784, loc:(*time.Location)(0x7deddc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 20 02:09:49.475: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 20 02:09:49.479: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-5915-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 
02:09:50.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-174" for this suite. STEP: Destroying namespace "webhook-174-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.736 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":294,"completed":62,"skipped":1275,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:09:50.663: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on tmpfs Jul 20 02:09:50.733: INFO: Waiting up to 5m0s for pod "pod-c3736ad1-48b2-4097-a6f5-5112de172b6b" in namespace "emptydir-7223" to be "Succeeded or Failed" Jul 20 02:09:50.761: INFO: Pod "pod-c3736ad1-48b2-4097-a6f5-5112de172b6b": Phase="Pending", Reason="", readiness=false. Elapsed: 27.687294ms Jul 20 02:09:52.764: INFO: Pod "pod-c3736ad1-48b2-4097-a6f5-5112de172b6b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031515489s Jul 20 02:09:54.769: INFO: Pod "pod-c3736ad1-48b2-4097-a6f5-5112de172b6b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035806748s STEP: Saw pod success Jul 20 02:09:54.769: INFO: Pod "pod-c3736ad1-48b2-4097-a6f5-5112de172b6b" satisfied condition "Succeeded or Failed" Jul 20 02:09:54.772: INFO: Trying to get logs from node latest-worker2 pod pod-c3736ad1-48b2-4097-a6f5-5112de172b6b container test-container: STEP: delete the pod Jul 20 02:09:54.825: INFO: Waiting for pod pod-c3736ad1-48b2-4097-a6f5-5112de172b6b to disappear Jul 20 02:09:54.835: INFO: Pod pod-c3736ad1-48b2-4097-a6f5-5112de172b6b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:09:54.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7223" for this suite. 
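A minimal sketch of the kind of pod this EmptyDir test drives, assuming busybox in place of the suite's mount-test image; the UID, names, and namespace are illustrative. The essential parts are the emptyDir volume with medium Memory (which is what makes it tmpfs) and a non-root security context:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	nonRootUID := int64(1001) // assumption; any non-zero UID demonstrates the non-root case
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-0777-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &nonRootUID},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// Create a file on the tmpfs mount and print its mode back.
				Command:      []string{"sh", "-c", "touch /test-volume/f && chmod 0777 /test-volume/f && stat -c %a /test-volume/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium Memory is what backs this emptyDir with tmpfs.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}

The "Succeeded or Failed" wait in the log is the suite letting a pod like this run to completion and then reading its container log to confirm the observed file mode.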
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":63,"skipped":1277,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:09:54.858: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:89 Jul 20 02:09:54.973: INFO: Waiting up to 1m0s for all nodes to be ready Jul 20 02:10:54.997: INFO: Waiting for terminating namespaces to be deleted... [It] validates lower priority pod preemption by critical pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create pods that use 2/3 of node resources. Jul 20 02:10:55.017: INFO: Created pod: pod0-sched-preemption-low-priority Jul 20 02:10:55.063: INFO: Created pod: pod1-sched-preemption-medium-priority STEP: Wait for pods to be scheduled. STEP: Run a critical pod that use same resources as that of a lower priority pod [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:11:09.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-538" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77 • [SLOW TEST:74.439 seconds] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates lower priority pod preemption by critical pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","total":294,"completed":64,"skipped":1296,"failed":0} SSSS ------------------------------ [sig-network] Ingress API should support creating Ingress API operations [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Ingress API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:11:09.297: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ingress STEP: Waiting for a default service account to be provisioned in namespace [It] should support creating Ingress API operations [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting /apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.io/v1 STEP: creating STEP: getting STEP: listing STEP: watching Jul 20 02:11:09.921: INFO: starting watch STEP: cluster-wide listing STEP: cluster-wide watching Jul 20 02:11:09.925: INFO: starting watch STEP: patching STEP: updating Jul 20 02:11:10.133: INFO: waiting for watch events with expected annotations Jul 20 02:11:10.133: INFO: saw patched and updated annotations STEP: patching /status STEP: updating /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] Ingress API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:11:10.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ingress-5253" for this suite.
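The verbs stepped through above map one-to-one onto the networking.k8s.io/v1 typed client. A sketch of the create and patch steps, with illustrative resource names, host, and backend service:

package main

import (
	"context"

	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()

	pathType := networkingv1.PathTypePrefix
	ing := &networkingv1.Ingress{
		ObjectMeta: metav1.ObjectMeta{Name: "ingress-demo"},
		Spec: networkingv1.IngressSpec{
			Rules: []networkingv1.IngressRule{{
				Host: "example.com",
				IngressRuleValue: networkingv1.IngressRuleValue{
					HTTP: &networkingv1.HTTPIngressRuleValue{
						Paths: []networkingv1.HTTPIngressPath{{
							Path:     "/",
							PathType: &pathType,
							Backend: networkingv1.IngressBackend{
								Service: &networkingv1.IngressServiceBackend{
									Name: "test-backend",
									Port: networkingv1.ServiceBackendPort{Number: 80},
								},
							},
						}},
					},
				},
			}},
		},
	}
	if _, err := cs.NetworkingV1().Ingresses("default").Create(ctx, ing, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// The "patching" step: a merge patch adding an annotation, which is the
	// kind of change the watches above wait to observe.
	patch := []byte(`{"metadata":{"annotations":{"patched":"true"}}}`)
	if _, err := cs.NetworkingV1().Ingresses("default").Patch(ctx, "ingress-demo", types.MergePatchType, patch, metav1.PatchOptions{}); err != nil {
		panic(err)
	}
}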
•{"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":294,"completed":65,"skipped":1300,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:11:10.317: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 20 02:11:10.937: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jul 20 02:11:13.101: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730807870, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730807870, loc:(*time.Location)(0x7deddc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730807871, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730807870, loc:(*time.Location)(0x7deddc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 20 02:11:16.139: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Jul 20 02:11:22.230: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config attach --namespace=webhook-6040 to-be-attached-pod -i -c=container1' Jul 20 02:11:25.326: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:11:25.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6040" for this suite. STEP: Destroying namespace "webhook-6040-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:15.156 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":294,"completed":66,"skipped":1306,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:11:25.474: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir volume type on tmpfs Jul 20 02:11:25.598: INFO: Waiting up to 5m0s for pod "pod-4409c4fe-e51b-459e-9a50-383b7ad53444" in namespace "emptydir-9177" to be "Succeeded or Failed" Jul 20 02:11:25.609: INFO: Pod "pod-4409c4fe-e51b-459e-9a50-383b7ad53444": Phase="Pending", Reason="", readiness=false. Elapsed: 10.469211ms Jul 20 02:11:27.613: INFO: Pod "pod-4409c4fe-e51b-459e-9a50-383b7ad53444": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014346394s Jul 20 02:11:29.617: INFO: Pod "pod-4409c4fe-e51b-459e-9a50-383b7ad53444": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018863036s STEP: Saw pod success Jul 20 02:11:29.617: INFO: Pod "pod-4409c4fe-e51b-459e-9a50-383b7ad53444" satisfied condition "Succeeded or Failed" Jul 20 02:11:29.620: INFO: Trying to get logs from node latest-worker2 pod pod-4409c4fe-e51b-459e-9a50-383b7ad53444 container test-container: STEP: delete the pod Jul 20 02:11:29.729: INFO: Waiting for pod pod-4409c4fe-e51b-459e-9a50-383b7ad53444 to disappear Jul 20 02:11:29.766: INFO: Pod pod-4409c4fe-e51b-459e-9a50-383b7ad53444 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:11:29.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9177" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":67,"skipped":1326,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:11:29.776: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 20 02:11:30.631: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jul 20 02:11:32.647: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730807890, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730807890, loc:(*time.Location)(0x7deddc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730807890, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730807890, loc:(*time.Location)(0x7deddc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 20 02:11:35.702: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 20 02:11:35.706: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-6750-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:11:36.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7196" for this suite. STEP: Destroying namespace "webhook-7196-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.321 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":294,"completed":68,"skipped":1327,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:11:37.097: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:11:53.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6512" for this suite. • [SLOW TEST:16.344 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance]","total":294,"completed":69,"skipped":1370,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:11:53.441: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-e0ad0be7-dda5-434f-b763-1f595f7598da STEP: Creating a pod to test consume secrets Jul 20 02:11:53.595: INFO: Waiting up to 5m0s for pod "pod-secrets-a8ccc735-3919-432f-b006-7a6616b1b491" in namespace "secrets-2095" to be "Succeeded or Failed" Jul 20 02:11:53.599: INFO: Pod "pod-secrets-a8ccc735-3919-432f-b006-7a6616b1b491": Phase="Pending", Reason="", readiness=false. Elapsed: 3.548588ms Jul 20 02:11:55.602: INFO: Pod "pod-secrets-a8ccc735-3919-432f-b006-7a6616b1b491": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007359842s Jul 20 02:11:57.606: INFO: Pod "pod-secrets-a8ccc735-3919-432f-b006-7a6616b1b491": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010833322s STEP: Saw pod success Jul 20 02:11:57.606: INFO: Pod "pod-secrets-a8ccc735-3919-432f-b006-7a6616b1b491" satisfied condition "Succeeded or Failed" Jul 20 02:11:57.608: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-a8ccc735-3919-432f-b006-7a6616b1b491 container secret-volume-test: STEP: delete the pod Jul 20 02:11:57.646: INFO: Waiting for pod pod-secrets-a8ccc735-3919-432f-b006-7a6616b1b491 to disappear Jul 20 02:11:57.653: INFO: Pod pod-secrets-a8ccc735-3919-432f-b006-7a6616b1b491 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:11:57.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2095" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":70,"skipped":1379,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:11:57.661: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-3575 STEP: creating a selector STEP: Creating the service pods in kubernetes Jul 20 02:11:57.761: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Jul 20 02:11:57.831: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jul 20 02:11:59.919: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jul 20 02:12:01.835: INFO: The status of Pod netserver-0 is Running (Ready = false) Jul 20 02:12:03.835: INFO: The status of Pod netserver-0 is Running (Ready = false) Jul 20 02:12:05.835: INFO: The status of Pod netserver-0 is Running (Ready = false) Jul 20 02:12:07.835: INFO: The status of Pod netserver-0 is Running (Ready = false) Jul 20 02:12:09.835: INFO: The status of Pod netserver-0 is Running (Ready = false) Jul 20 02:12:11.835: INFO: The status of Pod netserver-0 is Running (Ready = false) Jul 20 02:12:13.835: INFO: The status of Pod netserver-0 is Running (Ready = false) Jul 20 02:12:15.835: INFO: The status of Pod netserver-0 is Running (Ready = true) Jul 20 02:12:15.842: INFO: The status of Pod netserver-1 is Running (Ready = false) Jul 20 02:12:17.846: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Jul 20 02:12:21.907: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.9 8081 | grep -v '^\s*$'] Namespace:pod-network-test-3575 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 20 02:12:21.907: INFO: >>> kubeConfig: /root/.kube/config I0720 02:12:21.945139 8 log.go:181] (0xc002104b00) (0xc00316d900) Create stream I0720 02:12:21.945175 8 log.go:181] (0xc002104b00) (0xc00316d900) Stream added, broadcasting: 1 I0720 02:12:21.950291 8 log.go:181] (0xc002104b00) Reply frame received for 1 I0720 02:12:21.950351 8 log.go:181] (0xc002104b00) (0xc0011c9e00) Create stream I0720 02:12:21.950365 8 log.go:181] (0xc002104b00) (0xc0011c9e00) Stream added, broadcasting: 3 I0720 02:12:21.951416 8 log.go:181] (0xc002104b00) Reply frame received for 3 I0720 02:12:21.951470 8 log.go:181] (0xc002104b00) (0xc0018e0960) Create stream I0720 02:12:21.951496 8 log.go:181] (0xc002104b00) (0xc0018e0960) Stream added, broadcasting: 5 I0720 02:12:21.952368 8 log.go:181] (0xc002104b00) Reply frame received for 5 I0720 02:12:23.022554 8 
log.go:181] (0xc002104b00) Data frame received for 3 I0720 02:12:23.022588 8 log.go:181] (0xc0011c9e00) (3) Data frame handling I0720 02:12:23.022609 8 log.go:181] (0xc0011c9e00) (3) Data frame sent I0720 02:12:23.022640 8 log.go:181] (0xc002104b00) Data frame received for 3 I0720 02:12:23.022653 8 log.go:181] (0xc0011c9e00) (3) Data frame handling I0720 02:12:23.022810 8 log.go:181] (0xc002104b00) Data frame received for 5 I0720 02:12:23.022837 8 log.go:181] (0xc0018e0960) (5) Data frame handling I0720 02:12:23.024618 8 log.go:181] (0xc002104b00) Data frame received for 1 I0720 02:12:23.024636 8 log.go:181] (0xc00316d900) (1) Data frame handling I0720 02:12:23.024643 8 log.go:181] (0xc00316d900) (1) Data frame sent I0720 02:12:23.024651 8 log.go:181] (0xc002104b00) (0xc00316d900) Stream removed, broadcasting: 1 I0720 02:12:23.024667 8 log.go:181] (0xc002104b00) Go away received I0720 02:12:23.025085 8 log.go:181] (0xc002104b00) (0xc00316d900) Stream removed, broadcasting: 1 I0720 02:12:23.025109 8 log.go:181] (0xc002104b00) (0xc0011c9e00) Stream removed, broadcasting: 3 I0720 02:12:23.025128 8 log.go:181] (0xc002104b00) (0xc0018e0960) Stream removed, broadcasting: 5 Jul 20 02:12:23.025: INFO: Found all expected endpoints: [netserver-0] Jul 20 02:12:23.027: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.140 8081 | grep -v '^\s*$'] Namespace:pod-network-test-3575 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 20 02:12:23.027: INFO: >>> kubeConfig: /root/.kube/config I0720 02:12:23.052970 8 log.go:181] (0xc0024b98c0) (0xc0013bcaa0) Create stream I0720 02:12:23.052997 8 log.go:181] (0xc0024b98c0) (0xc0013bcaa0) Stream added, broadcasting: 1 I0720 02:12:23.061390 8 log.go:181] (0xc0024b98c0) Reply frame received for 1 I0720 02:12:23.061434 8 log.go:181] (0xc0024b98c0) (0xc0013bc140) Create stream I0720 02:12:23.061448 8 log.go:181] (0xc0024b98c0) (0xc0013bc140) Stream added, broadcasting: 3 I0720 02:12:23.062258 8 log.go:181] (0xc0024b98c0) Reply frame received for 3 I0720 02:12:23.062299 8 log.go:181] (0xc0024b98c0) (0xc00316c000) Create stream I0720 02:12:23.062309 8 log.go:181] (0xc0024b98c0) (0xc00316c000) Stream added, broadcasting: 5 I0720 02:12:23.063108 8 log.go:181] (0xc0024b98c0) Reply frame received for 5 I0720 02:12:24.133725 8 log.go:181] (0xc0024b98c0) Data frame received for 3 I0720 02:12:24.133768 8 log.go:181] (0xc0013bc140) (3) Data frame handling I0720 02:12:24.133788 8 log.go:181] (0xc0013bc140) (3) Data frame sent I0720 02:12:24.133956 8 log.go:181] (0xc0024b98c0) Data frame received for 3 I0720 02:12:24.133985 8 log.go:181] (0xc0013bc140) (3) Data frame handling I0720 02:12:24.134009 8 log.go:181] (0xc0024b98c0) Data frame received for 5 I0720 02:12:24.134040 8 log.go:181] (0xc00316c000) (5) Data frame handling I0720 02:12:24.135246 8 log.go:181] (0xc0024b98c0) Data frame received for 1 I0720 02:12:24.135277 8 log.go:181] (0xc0013bcaa0) (1) Data frame handling I0720 02:12:24.135286 8 log.go:181] (0xc0013bcaa0) (1) Data frame sent I0720 02:12:24.135295 8 log.go:181] (0xc0024b98c0) (0xc0013bcaa0) Stream removed, broadcasting: 1 I0720 02:12:24.135310 8 log.go:181] (0xc0024b98c0) Go away received I0720 02:12:24.135409 8 log.go:181] (0xc0024b98c0) (0xc0013bcaa0) Stream removed, broadcasting: 1 I0720 02:12:24.135428 8 log.go:181] (0xc0024b98c0) (0xc0013bc140) Stream removed, broadcasting: 3 I0720 02:12:24.135440 8 log.go:181] (0xc0024b98c0) (0xc00316c000) Stream 
removed, broadcasting: 5 Jul 20 02:12:24.135: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:12:24.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-3575" for this suite. • [SLOW TEST:26.482 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":71,"skipped":1420,"failed":0} SSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:12:24.143: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 20 02:12:24.268: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Jul 20 02:12:24.278: INFO: Number of nodes with available pods: 0 Jul 20 02:12:24.278: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
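The "Change node label to blue" step is a plain label write on the Node object; a DaemonSet whose pod template carries a matching nodeSelector is then scheduled there, and rewriting the label to green later unschedules it again. A sketch using a strategic-merge patch; the label key is an assumption, the node name is the one in this log:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Label the node "blue"; a DaemonSet whose pod template has
	// nodeSelector {"color": "blue"} (key is an assumption) now targets it.
	patch := []byte(`{"metadata":{"labels":{"color":"blue"}}}`)
	if _, err := cs.CoreV1().Nodes().Patch(context.TODO(), "latest-worker2", types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
		panic(err)
	}
}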
Jul 20 02:12:24.368: INFO: Number of nodes with available pods: 0 Jul 20 02:12:24.368: INFO: Node latest-worker2 is running more than one daemon pod Jul 20 02:12:25.371: INFO: Number of nodes with available pods: 0 Jul 20 02:12:25.371: INFO: Node latest-worker2 is running more than one daemon pod Jul 20 02:12:26.371: INFO: Number of nodes with available pods: 0 Jul 20 02:12:26.371: INFO: Node latest-worker2 is running more than one daemon pod Jul 20 02:12:27.372: INFO: Number of nodes with available pods: 0 Jul 20 02:12:27.372: INFO: Node latest-worker2 is running more than one daemon pod Jul 20 02:12:28.372: INFO: Number of nodes with available pods: 1 Jul 20 02:12:28.372: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Jul 20 02:12:28.411: INFO: Number of nodes with available pods: 1 Jul 20 02:12:28.411: INFO: Number of running nodes: 0, number of available pods: 1 Jul 20 02:12:29.650: INFO: Number of nodes with available pods: 0 Jul 20 02:12:29.650: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Jul 20 02:12:29.709: INFO: Number of nodes with available pods: 0 Jul 20 02:12:29.709: INFO: Node latest-worker2 is running more than one daemon pod Jul 20 02:12:30.728: INFO: Number of nodes with available pods: 0 Jul 20 02:12:30.728: INFO: Node latest-worker2 is running more than one daemon pod Jul 20 02:12:31.713: INFO: Number of nodes with available pods: 0 Jul 20 02:12:31.713: INFO: Node latest-worker2 is running more than one daemon pod Jul 20 02:12:32.757: INFO: Number of nodes with available pods: 0 Jul 20 02:12:32.757: INFO: Node latest-worker2 is running more than one daemon pod Jul 20 02:12:33.713: INFO: Number of nodes with available pods: 0 Jul 20 02:12:33.713: INFO: Node latest-worker2 is running more than one daemon pod Jul 20 02:12:34.714: INFO: Number of nodes with available pods: 0 Jul 20 02:12:34.714: INFO: Node latest-worker2 is running more than one daemon pod Jul 20 02:12:35.714: INFO: Number of nodes with available pods: 0 Jul 20 02:12:35.714: INFO: Node latest-worker2 is running more than one daemon pod Jul 20 02:12:36.714: INFO: Number of nodes with available pods: 0 Jul 20 02:12:36.714: INFO: Node latest-worker2 is running more than one daemon pod Jul 20 02:12:37.714: INFO: Number of nodes with available pods: 0 Jul 20 02:12:37.714: INFO: Node latest-worker2 is running more than one daemon pod Jul 20 02:12:38.720: INFO: Number of nodes with available pods: 0 Jul 20 02:12:38.720: INFO: Node latest-worker2 is running more than one daemon pod Jul 20 02:12:39.713: INFO: Number of nodes with available pods: 0 Jul 20 02:12:39.713: INFO: Node latest-worker2 is running more than one daemon pod Jul 20 02:12:40.714: INFO: Number of nodes with available pods: 0 Jul 20 02:12:40.714: INFO: Node latest-worker2 is running more than one daemon pod Jul 20 02:12:41.715: INFO: Number of nodes with available pods: 0 Jul 20 02:12:41.715: INFO: Node latest-worker2 is running more than one daemon pod Jul 20 02:12:42.714: INFO: Number of nodes with available pods: 0 Jul 20 02:12:42.714: INFO: Node latest-worker2 is running more than one daemon pod Jul 20 02:12:43.714: INFO: Number of nodes with available pods: 0 Jul 20 02:12:43.714: INFO: Node latest-worker2 is running more than one daemon pod Jul 20 02:12:44.714: INFO: Number of nodes with available pods: 0 Jul 20 02:12:44.714: INFO: Node 
latest-worker2 is running more than one daemon pod Jul 20 02:12:45.944: INFO: Number of nodes with available pods: 0 Jul 20 02:12:45.944: INFO: Node latest-worker2 is running more than one daemon pod Jul 20 02:12:46.729: INFO: Number of nodes with available pods: 1 Jul 20 02:12:46.729: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8309, will wait for the garbage collector to delete the pods Jul 20 02:12:46.792: INFO: Deleting DaemonSet.extensions daemon-set took: 5.324241ms Jul 20 02:12:47.192: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.220464ms Jul 20 02:12:53.919: INFO: Number of nodes with available pods: 0 Jul 20 02:12:53.919: INFO: Number of running nodes: 0, number of available pods: 0 Jul 20 02:12:53.922: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8309/daemonsets","resourceVersion":"91017"},"items":null} Jul 20 02:12:53.924: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8309/pods","resourceVersion":"91017"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:12:53.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8309" for this suite. • [SLOW TEST:29.824 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":294,"completed":72,"skipped":1424,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:12:53.967: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod Jul 20 02:12:54.077: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:13:01.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-6233" for this suite. 
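With restartPolicy Never, init containers run once each, in order, and the app container starts only after all of them succeed; that ordering is what the InitContainer test above asserts. A hedged sketch of the pod shape, with busybox standing in for the suite's image:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			InitContainers: []corev1.Container{
				// These run to completion sequentially before "run1" starts.
				{Name: "init1", Image: "busybox", Command: []string{"/bin/true"}},
				{Name: "init2", Image: "busybox", Command: []string{"/bin/true"}},
			},
			Containers: []corev1.Container{
				{Name: "run1", Image: "busybox", Command: []string{"/bin/true"}},
			},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}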
• [SLOW TEST:7.779 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":294,"completed":73,"skipped":1435,"failed":0} SS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:13:01.747: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:13:05.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2769" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":74,"skipped":1437,"failed":0} SSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:13:05.843: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-upd-ecbe55c1-b2db-4c88-8011-163edc35de4e STEP: Creating the pod STEP: Updating configmap configmap-test-upd-ecbe55c1-b2db-4c88-8011-163edc35de4e STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:13:12.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8258" for this suite. 
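ConfigMap volumes are projected by the kubelet and re-synced periodically, so updating the ConfigMap eventually rewrites the file inside the running pod with no restart; that is the "waiting to observe update in volume" step above. A sketch of the update side, with illustrative names and data:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()

	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-upd-demo"},
		Data:       map[string]string{"data-1": "value-1"},
	}
	created, err := cs.CoreV1().ConfigMaps("default").Create(ctx, cm, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}

	// ... a pod mounting this ConfigMap as a volume would be created here ...

	// Update in place; the kubelet rewrites the projected file on its next
	// sync period, which is what the test's observation loop polls for.
	created.Data["data-1"] = "value-2"
	if _, err := cs.CoreV1().ConfigMaps("default").Update(ctx, created, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}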
• [SLOW TEST:6.417 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":294,"completed":75,"skipped":1441,"failed":0} [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:13:12.260: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-2276 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating stateful set ss in namespace statefulset-2276 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-2276 Jul 20 02:13:12.397: INFO: Found 0 stateful pods, waiting for 1 Jul 20 02:13:22.465: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Jul 20 02:13:22.468: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2276 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jul 20 02:13:22.753: INFO: stderr: "I0720 02:13:22.603375 1211 log.go:181] (0xc000f2a000) (0xc0001fe280) Create stream\nI0720 02:13:22.603433 1211 log.go:181] (0xc000f2a000) (0xc0001fe280) Stream added, broadcasting: 1\nI0720 02:13:22.605378 1211 log.go:181] (0xc000f2a000) Reply frame received for 1\nI0720 02:13:22.605417 1211 log.go:181] (0xc000f2a000) (0xc0001ffea0) Create stream\nI0720 02:13:22.605430 1211 log.go:181] (0xc000f2a000) (0xc0001ffea0) Stream added, broadcasting: 3\nI0720 02:13:22.606366 1211 log.go:181] (0xc000f2a000) Reply frame received for 3\nI0720 02:13:22.606402 1211 log.go:181] (0xc000f2a000) (0xc000354780) Create stream\nI0720 02:13:22.606423 1211 log.go:181] (0xc000f2a000) (0xc000354780) Stream added, broadcasting: 5\nI0720 02:13:22.607562 1211 log.go:181] (0xc000f2a000) Reply frame received for 5\nI0720 02:13:22.669121 1211 log.go:181] (0xc000f2a000) Data frame received for 5\nI0720 02:13:22.669144 1211 log.go:181] (0xc000354780) (5) Data frame handling\nI0720 02:13:22.669158 1211 log.go:181] (0xc000354780) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0720 02:13:22.744074 1211 log.go:181] (0xc000f2a000) Data frame received for 
3\nI0720 02:13:22.744126 1211 log.go:181] (0xc0001ffea0) (3) Data frame handling\nI0720 02:13:22.744167 1211 log.go:181] (0xc0001ffea0) (3) Data frame sent\nI0720 02:13:22.744354 1211 log.go:181] (0xc000f2a000) Data frame received for 5\nI0720 02:13:22.744373 1211 log.go:181] (0xc000354780) (5) Data frame handling\nI0720 02:13:22.744470 1211 log.go:181] (0xc000f2a000) Data frame received for 3\nI0720 02:13:22.744502 1211 log.go:181] (0xc0001ffea0) (3) Data frame handling\nI0720 02:13:22.746869 1211 log.go:181] (0xc000f2a000) Data frame received for 1\nI0720 02:13:22.746897 1211 log.go:181] (0xc0001fe280) (1) Data frame handling\nI0720 02:13:22.746912 1211 log.go:181] (0xc0001fe280) (1) Data frame sent\nI0720 02:13:22.746995 1211 log.go:181] (0xc000f2a000) (0xc0001fe280) Stream removed, broadcasting: 1\nI0720 02:13:22.747161 1211 log.go:181] (0xc000f2a000) Go away received\nI0720 02:13:22.747669 1211 log.go:181] (0xc000f2a000) (0xc0001fe280) Stream removed, broadcasting: 1\nI0720 02:13:22.747705 1211 log.go:181] (0xc000f2a000) (0xc0001ffea0) Stream removed, broadcasting: 3\nI0720 02:13:22.747722 1211 log.go:181] (0xc000f2a000) (0xc000354780) Stream removed, broadcasting: 5\n" Jul 20 02:13:22.753: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jul 20 02:13:22.753: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jul 20 02:13:22.757: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jul 20 02:13:32.761: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jul 20 02:13:32.761: INFO: Waiting for statefulset status.replicas updated to 0 Jul 20 02:13:33.399: INFO: POD NODE PHASE GRACE CONDITIONS Jul 20 02:13:33.399: INFO: ss-0 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:12 +0000 UTC }] Jul 20 02:13:33.399: INFO: Jul 20 02:13:33.399: INFO: StatefulSet ss has not reached scale 3, at 1 Jul 20 02:13:34.622: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.370604492s Jul 20 02:13:35.705: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.147733232s Jul 20 02:13:36.987: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.064183801s Jul 20 02:13:38.101: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.78291084s Jul 20 02:13:39.104: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.668472218s Jul 20 02:13:40.130: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.665439642s Jul 20 02:13:41.214: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.639148163s Jul 20 02:13:42.310: INFO: Verifying statefulset ss doesn't scale past 3 for another 555.027522ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-2276 Jul 20 02:13:43.322: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2276 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html 
/usr/local/apache2/htdocs/ || true' Jul 20 02:13:43.548: INFO: stderr: "I0720 02:13:43.467882 1229 log.go:181] (0xc000c974a0) (0xc000330be0) Create stream\nI0720 02:13:43.467972 1229 log.go:181] (0xc000c974a0) (0xc000330be0) Stream added, broadcasting: 1\nI0720 02:13:43.470710 1229 log.go:181] (0xc000c974a0) Reply frame received for 1\nI0720 02:13:43.470750 1229 log.go:181] (0xc000c974a0) (0xc000331360) Create stream\nI0720 02:13:43.470764 1229 log.go:181] (0xc000c974a0) (0xc000331360) Stream added, broadcasting: 3\nI0720 02:13:43.471943 1229 log.go:181] (0xc000c974a0) Reply frame received for 3\nI0720 02:13:43.471980 1229 log.go:181] (0xc000c974a0) (0xc0003d4320) Create stream\nI0720 02:13:43.471994 1229 log.go:181] (0xc000c974a0) (0xc0003d4320) Stream added, broadcasting: 5\nI0720 02:13:43.473243 1229 log.go:181] (0xc000c974a0) Reply frame received for 5\nI0720 02:13:43.541439 1229 log.go:181] (0xc000c974a0) Data frame received for 5\nI0720 02:13:43.541479 1229 log.go:181] (0xc0003d4320) (5) Data frame handling\nI0720 02:13:43.541504 1229 log.go:181] (0xc0003d4320) (5) Data frame sent\nI0720 02:13:43.541518 1229 log.go:181] (0xc000c974a0) Data frame received for 5\nI0720 02:13:43.541529 1229 log.go:181] (0xc0003d4320) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0720 02:13:43.541615 1229 log.go:181] (0xc000c974a0) Data frame received for 3\nI0720 02:13:43.541632 1229 log.go:181] (0xc000331360) (3) Data frame handling\nI0720 02:13:43.541646 1229 log.go:181] (0xc000331360) (3) Data frame sent\nI0720 02:13:43.541651 1229 log.go:181] (0xc000c974a0) Data frame received for 3\nI0720 02:13:43.541657 1229 log.go:181] (0xc000331360) (3) Data frame handling\nI0720 02:13:43.542915 1229 log.go:181] (0xc000c974a0) Data frame received for 1\nI0720 02:13:43.542942 1229 log.go:181] (0xc000330be0) (1) Data frame handling\nI0720 02:13:43.542956 1229 log.go:181] (0xc000330be0) (1) Data frame sent\nI0720 02:13:43.542981 1229 log.go:181] (0xc000c974a0) (0xc000330be0) Stream removed, broadcasting: 1\nI0720 02:13:43.543436 1229 log.go:181] (0xc000c974a0) (0xc000330be0) Stream removed, broadcasting: 1\nI0720 02:13:43.543454 1229 log.go:181] (0xc000c974a0) (0xc000331360) Stream removed, broadcasting: 3\nI0720 02:13:43.543463 1229 log.go:181] (0xc000c974a0) (0xc0003d4320) Stream removed, broadcasting: 5\n" Jul 20 02:13:43.548: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jul 20 02:13:43.548: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jul 20 02:13:43.548: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2276 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 20 02:13:43.759: INFO: stderr: "I0720 02:13:43.686227 1243 log.go:181] (0xc0005cb340) (0xc000bdf540) Create stream\nI0720 02:13:43.686291 1243 log.go:181] (0xc0005cb340) (0xc000bdf540) Stream added, broadcasting: 1\nI0720 02:13:43.689082 1243 log.go:181] (0xc0005cb340) Reply frame received for 1\nI0720 02:13:43.689123 1243 log.go:181] (0xc0005cb340) (0xc000bdf5e0) Create stream\nI0720 02:13:43.689138 1243 log.go:181] (0xc0005cb340) (0xc000bdf5e0) Stream added, broadcasting: 3\nI0720 02:13:43.690897 1243 log.go:181] (0xc0005cb340) Reply frame received for 3\nI0720 02:13:43.690938 1243 log.go:181] (0xc0005cb340) (0xc000bb8b40) Create stream\nI0720 02:13:43.690966 1243 log.go:181] 
(0xc0005cb340) (0xc000bb8b40) Stream added, broadcasting: 5\nI0720 02:13:43.691883 1243 log.go:181] (0xc0005cb340) Reply frame received for 5\nI0720 02:13:43.752844 1243 log.go:181] (0xc0005cb340) Data frame received for 5\nI0720 02:13:43.752881 1243 log.go:181] (0xc000bb8b40) (5) Data frame handling\nI0720 02:13:43.752895 1243 log.go:181] (0xc000bb8b40) (5) Data frame sent\nI0720 02:13:43.752904 1243 log.go:181] (0xc0005cb340) Data frame received for 5\nI0720 02:13:43.752913 1243 log.go:181] (0xc000bb8b40) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0720 02:13:43.752924 1243 log.go:181] (0xc0005cb340) Data frame received for 3\nI0720 02:13:43.752994 1243 log.go:181] (0xc000bdf5e0) (3) Data frame handling\nI0720 02:13:43.753026 1243 log.go:181] (0xc000bdf5e0) (3) Data frame sent\nI0720 02:13:43.753043 1243 log.go:181] (0xc0005cb340) Data frame received for 3\nI0720 02:13:43.753049 1243 log.go:181] (0xc000bdf5e0) (3) Data frame handling\nI0720 02:13:43.754617 1243 log.go:181] (0xc0005cb340) Data frame received for 1\nI0720 02:13:43.754641 1243 log.go:181] (0xc000bdf540) (1) Data frame handling\nI0720 02:13:43.754663 1243 log.go:181] (0xc000bdf540) (1) Data frame sent\nI0720 02:13:43.754678 1243 log.go:181] (0xc0005cb340) (0xc000bdf540) Stream removed, broadcasting: 1\nI0720 02:13:43.754696 1243 log.go:181] (0xc0005cb340) Go away received\nI0720 02:13:43.755196 1243 log.go:181] (0xc0005cb340) (0xc000bdf540) Stream removed, broadcasting: 1\nI0720 02:13:43.755223 1243 log.go:181] (0xc0005cb340) (0xc000bdf5e0) Stream removed, broadcasting: 3\nI0720 02:13:43.755236 1243 log.go:181] (0xc0005cb340) (0xc000bb8b40) Stream removed, broadcasting: 5\n" Jul 20 02:13:43.759: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jul 20 02:13:43.759: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jul 20 02:13:43.759: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2276 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 20 02:13:44.038: INFO: stderr: "I0720 02:13:43.956464 1260 log.go:181] (0xc000d94c60) (0xc0009f45a0) Create stream\nI0720 02:13:43.956561 1260 log.go:181] (0xc000d94c60) (0xc0009f45a0) Stream added, broadcasting: 1\nI0720 02:13:43.960896 1260 log.go:181] (0xc000d94c60) Reply frame received for 1\nI0720 02:13:43.960937 1260 log.go:181] (0xc000d94c60) (0xc00092d220) Create stream\nI0720 02:13:43.960951 1260 log.go:181] (0xc000d94c60) (0xc00092d220) Stream added, broadcasting: 3\nI0720 02:13:43.961806 1260 log.go:181] (0xc000d94c60) Reply frame received for 3\nI0720 02:13:43.961831 1260 log.go:181] (0xc000d94c60) (0xc000916780) Create stream\nI0720 02:13:43.961840 1260 log.go:181] (0xc000d94c60) (0xc000916780) Stream added, broadcasting: 5\nI0720 02:13:43.962681 1260 log.go:181] (0xc000d94c60) Reply frame received for 5\nI0720 02:13:44.031944 1260 log.go:181] (0xc000d94c60) Data frame received for 3\nI0720 02:13:44.031975 1260 log.go:181] (0xc00092d220) (3) Data frame handling\nI0720 02:13:44.031983 1260 log.go:181] (0xc00092d220) (3) Data frame sent\nI0720 02:13:44.031989 1260 log.go:181] (0xc000d94c60) Data frame received for 3\nI0720 02:13:44.031994 1260 log.go:181] (0xc00092d220) (3) Data frame handling\nI0720 02:13:44.032015 1260 log.go:181] 
(0xc000d94c60) Data frame received for 5\nI0720 02:13:44.032022 1260 log.go:181] (0xc000916780) (5) Data frame handling\nI0720 02:13:44.032028 1260 log.go:181] (0xc000916780) (5) Data frame sent\nI0720 02:13:44.032033 1260 log.go:181] (0xc000d94c60) Data frame received for 5\nI0720 02:13:44.032038 1260 log.go:181] (0xc000916780) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0720 02:13:44.033634 1260 log.go:181] (0xc000d94c60) Data frame received for 1\nI0720 02:13:44.033650 1260 log.go:181] (0xc0009f45a0) (1) Data frame handling\nI0720 02:13:44.033656 1260 log.go:181] (0xc0009f45a0) (1) Data frame sent\nI0720 02:13:44.033675 1260 log.go:181] (0xc000d94c60) (0xc0009f45a0) Stream removed, broadcasting: 1\nI0720 02:13:44.033755 1260 log.go:181] (0xc000d94c60) Go away received\nI0720 02:13:44.033951 1260 log.go:181] (0xc000d94c60) (0xc0009f45a0) Stream removed, broadcasting: 1\nI0720 02:13:44.033967 1260 log.go:181] (0xc000d94c60) (0xc00092d220) Stream removed, broadcasting: 3\nI0720 02:13:44.033973 1260 log.go:181] (0xc000d94c60) (0xc000916780) Stream removed, broadcasting: 5\n" Jul 20 02:13:44.038: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jul 20 02:13:44.038: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jul 20 02:13:44.042: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Jul 20 02:13:54.047: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jul 20 02:13:54.047: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jul 20 02:13:54.047: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Jul 20 02:13:54.052: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2276 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jul 20 02:13:54.247: INFO: stderr: "I0720 02:13:54.191721 1278 log.go:181] (0xc0004f33f0) (0xc000a8d220) Create stream\nI0720 02:13:54.191795 1278 log.go:181] (0xc0004f33f0) (0xc000a8d220) Stream added, broadcasting: 1\nI0720 02:13:54.193973 1278 log.go:181] (0xc0004f33f0) Reply frame received for 1\nI0720 02:13:54.194006 1278 log.go:181] (0xc0004f33f0) (0xc0003088c0) Create stream\nI0720 02:13:54.194017 1278 log.go:181] (0xc0004f33f0) (0xc0003088c0) Stream added, broadcasting: 3\nI0720 02:13:54.195123 1278 log.go:181] (0xc0004f33f0) Reply frame received for 3\nI0720 02:13:54.195159 1278 log.go:181] (0xc0004f33f0) (0xc000aa9400) Create stream\nI0720 02:13:54.195173 1278 log.go:181] (0xc0004f33f0) (0xc000aa9400) Stream added, broadcasting: 5\nI0720 02:13:54.196058 1278 log.go:181] (0xc0004f33f0) Reply frame received for 5\nI0720 02:13:54.241298 1278 log.go:181] (0xc0004f33f0) Data frame received for 5\nI0720 02:13:54.241338 1278 log.go:181] (0xc000aa9400) (5) Data frame handling\nI0720 02:13:54.241355 1278 log.go:181] (0xc000aa9400) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0720 02:13:54.241364 1278 log.go:181] (0xc0004f33f0) Data frame received for 5\nI0720 02:13:54.241392 1278 log.go:181] (0xc000aa9400) (5) Data frame handling\nI0720 02:13:54.241413 1278 log.go:181] (0xc0004f33f0) Data frame received 
for 3\nI0720 02:13:54.241423 1278 log.go:181] (0xc0003088c0) (3) Data frame handling\nI0720 02:13:54.241435 1278 log.go:181] (0xc0003088c0) (3) Data frame sent\nI0720 02:13:54.241448 1278 log.go:181] (0xc0004f33f0) Data frame received for 3\nI0720 02:13:54.241457 1278 log.go:181] (0xc0003088c0) (3) Data frame handling\nI0720 02:13:54.242469 1278 log.go:181] (0xc0004f33f0) Data frame received for 1\nI0720 02:13:54.242489 1278 log.go:181] (0xc000a8d220) (1) Data frame handling\nI0720 02:13:54.242501 1278 log.go:181] (0xc000a8d220) (1) Data frame sent\nI0720 02:13:54.242522 1278 log.go:181] (0xc0004f33f0) (0xc000a8d220) Stream removed, broadcasting: 1\nI0720 02:13:54.242603 1278 log.go:181] (0xc0004f33f0) Go away received\nI0720 02:13:54.242904 1278 log.go:181] (0xc0004f33f0) (0xc000a8d220) Stream removed, broadcasting: 1\nI0720 02:13:54.242920 1278 log.go:181] (0xc0004f33f0) (0xc0003088c0) Stream removed, broadcasting: 3\nI0720 02:13:54.242927 1278 log.go:181] (0xc0004f33f0) (0xc000aa9400) Stream removed, broadcasting: 5\n" Jul 20 02:13:54.247: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jul 20 02:13:54.247: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jul 20 02:13:54.247: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2276 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jul 20 02:13:54.483: INFO: stderr: "I0720 02:13:54.382385 1296 log.go:181] (0xc000e22d10) (0xc000b812c0) Create stream\nI0720 02:13:54.382436 1296 log.go:181] (0xc000e22d10) (0xc000b812c0) Stream added, broadcasting: 1\nI0720 02:13:54.383821 1296 log.go:181] (0xc000e22d10) Reply frame received for 1\nI0720 02:13:54.383928 1296 log.go:181] (0xc000e22d10) (0xc000b78000) Create stream\nI0720 02:13:54.383948 1296 log.go:181] (0xc000e22d10) (0xc000b78000) Stream added, broadcasting: 3\nI0720 02:13:54.384830 1296 log.go:181] (0xc000e22d10) Reply frame received for 3\nI0720 02:13:54.384862 1296 log.go:181] (0xc000e22d10) (0xc000b70500) Create stream\nI0720 02:13:54.384883 1296 log.go:181] (0xc000e22d10) (0xc000b70500) Stream added, broadcasting: 5\nI0720 02:13:54.385513 1296 log.go:181] (0xc000e22d10) Reply frame received for 5\nI0720 02:13:54.443813 1296 log.go:181] (0xc000e22d10) Data frame received for 5\nI0720 02:13:54.443858 1296 log.go:181] (0xc000b70500) (5) Data frame handling\nI0720 02:13:54.443897 1296 log.go:181] (0xc000b70500) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0720 02:13:54.474991 1296 log.go:181] (0xc000e22d10) Data frame received for 3\nI0720 02:13:54.475009 1296 log.go:181] (0xc000b78000) (3) Data frame handling\nI0720 02:13:54.475015 1296 log.go:181] (0xc000b78000) (3) Data frame sent\nI0720 02:13:54.475407 1296 log.go:181] (0xc000e22d10) Data frame received for 5\nI0720 02:13:54.475422 1296 log.go:181] (0xc000b70500) (5) Data frame handling\nI0720 02:13:54.475885 1296 log.go:181] (0xc000e22d10) Data frame received for 3\nI0720 02:13:54.475954 1296 log.go:181] (0xc000b78000) (3) Data frame handling\nI0720 02:13:54.477525 1296 log.go:181] (0xc000e22d10) Data frame received for 1\nI0720 02:13:54.477540 1296 log.go:181] (0xc000b812c0) (1) Data frame handling\nI0720 02:13:54.477552 1296 log.go:181] (0xc000b812c0) (1) Data frame sent\nI0720 02:13:54.477565 1296 log.go:181] (0xc000e22d10) (0xc000b812c0) Stream removed, broadcasting: 
1\nI0720 02:13:54.477590 1296 log.go:181] (0xc000e22d10) Go away received\nI0720 02:13:54.477986 1296 log.go:181] (0xc000e22d10) (0xc000b812c0) Stream removed, broadcasting: 1\nI0720 02:13:54.478010 1296 log.go:181] (0xc000e22d10) (0xc000b78000) Stream removed, broadcasting: 3\nI0720 02:13:54.478021 1296 log.go:181] (0xc000e22d10) (0xc000b70500) Stream removed, broadcasting: 5\n" Jul 20 02:13:54.483: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jul 20 02:13:54.483: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jul 20 02:13:54.483: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2276 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jul 20 02:13:54.756: INFO: stderr: "I0720 02:13:54.632328 1313 log.go:181] (0xc000eacdc0) (0xc00085b400) Create stream\nI0720 02:13:54.632401 1313 log.go:181] (0xc000eacdc0) (0xc00085b400) Stream added, broadcasting: 1\nI0720 02:13:54.634770 1313 log.go:181] (0xc000eacdc0) Reply frame received for 1\nI0720 02:13:54.634800 1313 log.go:181] (0xc000eacdc0) (0xc000527540) Create stream\nI0720 02:13:54.634820 1313 log.go:181] (0xc000eacdc0) (0xc000527540) Stream added, broadcasting: 3\nI0720 02:13:54.635586 1313 log.go:181] (0xc000eacdc0) Reply frame received for 3\nI0720 02:13:54.635614 1313 log.go:181] (0xc000eacdc0) (0xc000562b40) Create stream\nI0720 02:13:54.635640 1313 log.go:181] (0xc000eacdc0) (0xc000562b40) Stream added, broadcasting: 5\nI0720 02:13:54.636450 1313 log.go:181] (0xc000eacdc0) Reply frame received for 5\nI0720 02:13:54.701834 1313 log.go:181] (0xc000eacdc0) Data frame received for 5\nI0720 02:13:54.701860 1313 log.go:181] (0xc000562b40) (5) Data frame handling\nI0720 02:13:54.701877 1313 log.go:181] (0xc000562b40) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0720 02:13:54.748630 1313 log.go:181] (0xc000eacdc0) Data frame received for 3\nI0720 02:13:54.748670 1313 log.go:181] (0xc000527540) (3) Data frame handling\nI0720 02:13:54.748683 1313 log.go:181] (0xc000527540) (3) Data frame sent\nI0720 02:13:54.748692 1313 log.go:181] (0xc000eacdc0) Data frame received for 3\nI0720 02:13:54.748700 1313 log.go:181] (0xc000527540) (3) Data frame handling\nI0720 02:13:54.748712 1313 log.go:181] (0xc000eacdc0) Data frame received for 5\nI0720 02:13:54.748778 1313 log.go:181] (0xc000562b40) (5) Data frame handling\nI0720 02:13:54.750929 1313 log.go:181] (0xc000eacdc0) Data frame received for 1\nI0720 02:13:54.750965 1313 log.go:181] (0xc00085b400) (1) Data frame handling\nI0720 02:13:54.750988 1313 log.go:181] (0xc00085b400) (1) Data frame sent\nI0720 02:13:54.751005 1313 log.go:181] (0xc000eacdc0) (0xc00085b400) Stream removed, broadcasting: 1\nI0720 02:13:54.751118 1313 log.go:181] (0xc000eacdc0) Go away received\nI0720 02:13:54.751518 1313 log.go:181] (0xc000eacdc0) (0xc00085b400) Stream removed, broadcasting: 1\nI0720 02:13:54.751546 1313 log.go:181] (0xc000eacdc0) (0xc000527540) Stream removed, broadcasting: 3\nI0720 02:13:54.751557 1313 log.go:181] (0xc000eacdc0) (0xc000562b40) Stream removed, broadcasting: 5\n" Jul 20 02:13:54.756: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jul 20 02:13:54.756: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jul 20 
02:13:54.756: INFO: Waiting for statefulset status.replicas updated to 0 Jul 20 02:13:54.759: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Jul 20 02:14:04.767: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jul 20 02:14:04.767: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jul 20 02:14:04.767: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jul 20 02:14:04.814: INFO: POD NODE PHASE GRACE CONDITIONS Jul 20 02:14:04.814: INFO: ss-0 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:12 +0000 UTC }] Jul 20 02:14:04.814: INFO: ss-1 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:33 +0000 UTC }] Jul 20 02:14:04.814: INFO: ss-2 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:33 +0000 UTC }] Jul 20 02:14:04.814: INFO: Jul 20 02:14:04.814: INFO: StatefulSet ss has not reached scale 0, at 3 Jul 20 02:14:05.918: INFO: POD NODE PHASE GRACE CONDITIONS Jul 20 02:14:05.918: INFO: ss-0 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:12 +0000 UTC }] Jul 20 02:14:05.918: INFO: ss-1 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:33 +0000 UTC }] Jul 20 02:14:05.919: INFO: ss-2 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 
00:00:00 +0000 UTC 2020-07-20 02:13:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:33 +0000 UTC }] Jul 20 02:14:05.919: INFO: Jul 20 02:14:05.919: INFO: StatefulSet ss has not reached scale 0, at 3 Jul 20 02:14:06.923: INFO: POD NODE PHASE GRACE CONDITIONS Jul 20 02:14:06.923: INFO: ss-0 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:12 +0000 UTC }] Jul 20 02:14:06.923: INFO: ss-1 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:33 +0000 UTC }] Jul 20 02:14:06.923: INFO: ss-2 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:33 +0000 UTC }] Jul 20 02:14:06.923: INFO: Jul 20 02:14:06.923: INFO: StatefulSet ss has not reached scale 0, at 3 Jul 20 02:14:07.927: INFO: POD NODE PHASE GRACE CONDITIONS Jul 20 02:14:07.927: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:12 +0000 UTC }] Jul 20 02:14:07.927: INFO: ss-1 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:33 +0000 UTC }] Jul 20 02:14:07.927: INFO: ss-2 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:33 +0000 UTC }] Jul 20 
02:14:07.927: INFO: Jul 20 02:14:07.927: INFO: StatefulSet ss has not reached scale 0, at 3 Jul 20 02:14:08.931: INFO: POD NODE PHASE GRACE CONDITIONS Jul 20 02:14:08.931: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:12 +0000 UTC }] Jul 20 02:14:08.931: INFO: ss-2 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:33 +0000 UTC }] Jul 20 02:14:08.931: INFO: Jul 20 02:14:08.931: INFO: StatefulSet ss has not reached scale 0, at 2 Jul 20 02:14:09.936: INFO: POD NODE PHASE GRACE CONDITIONS Jul 20 02:14:09.936: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:12 +0000 UTC }] Jul 20 02:14:09.936: INFO: ss-2 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:33 +0000 UTC }] Jul 20 02:14:09.936: INFO: Jul 20 02:14:09.936: INFO: StatefulSet ss has not reached scale 0, at 2 Jul 20 02:14:10.941: INFO: POD NODE PHASE GRACE CONDITIONS Jul 20 02:14:10.941: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:12 +0000 UTC }] Jul 20 02:14:10.941: INFO: ss-2 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:33 +0000 UTC }] Jul 20 02:14:10.941: INFO: Jul 20 02:14:10.941: 
INFO: StatefulSet ss has not reached scale 0, at 2 Jul 20 02:14:11.946: INFO: POD NODE PHASE GRACE CONDITIONS Jul 20 02:14:11.946: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:12 +0000 UTC }] Jul 20 02:14:11.946: INFO: ss-2 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:33 +0000 UTC }] Jul 20 02:14:11.946: INFO: Jul 20 02:14:11.946: INFO: StatefulSet ss has not reached scale 0, at 2 Jul 20 02:14:12.951: INFO: POD NODE PHASE GRACE CONDITIONS Jul 20 02:14:12.951: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:12 +0000 UTC }] Jul 20 02:14:12.951: INFO: ss-2 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 02:13:33 +0000 UTC }] Jul 20 02:14:12.951: INFO: Jul 20 02:14:12.951: INFO: StatefulSet ss has not reached scale 0, at 2 Jul 20 02:14:13.955: INFO: Verifying statefulset ss doesn't scale past 0 for another 822.553482ms STEP: Scaling down stateful set ss to 0 replicas and waiting until no pods are running in namespace statefulset-2276 Jul 20 02:14:14.959: INFO: Scaling statefulset ss to 0 Jul 20 02:14:14.970: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Jul 20 02:14:14.972: INFO: Deleting all statefulset in ns statefulset-2276 Jul 20 02:14:14.975: INFO: Scaling statefulset ss to 0 Jul 20 02:14:14.983: INFO: Waiting for statefulset status.replicas updated to 0 Jul 20 02:14:14.985: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:14:15.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2276" for this suite.
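What "burst scaling" means in the spec above: the StatefulSet is created with podManagementPolicy: Parallel, so the controller creates and deletes pods without waiting for their neighbours to become Ready, and the spec proves that even deliberately unready pods (the mv of index.html in the exec output above breaks the HTTP readiness probe) do not stall scale-up or scale-down. A hedged sketch of such a StatefulSet follows; field values are illustrative rather than the framework's exact ones.

package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
)

// createBurstStatefulSet creates a StatefulSet whose pods are managed in
// parallel, the property the burst-scaling spec above exercises.
func createBurstStatefulSet(ctx context.Context, client kubernetes.Interface, ns string) error {
	replicas := int32(1)
	labels := map[string]string{"app": "ss"} // illustrative selector labels
	ss := &appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{Name: "ss"},
		Spec: appsv1.StatefulSetSpec{
			Replicas:    &replicas,
			ServiceName: "test", // the headless service created in BeforeEach above
			// Parallel (as opposed to the default OrderedReady) is what
			// allows "burst" scaling past unready pods.
			PodManagementPolicy: appsv1.ParallelPodManagement,
			Selector:            &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "webserver",
						Image: "docker.io/library/httpd:2.4.38-alpine",
						// A readiness probe on /index.html explains the mv
						// trick logged above: moving the file away marks the
						// pod unready, moving it back heals it. (The embedded
						// field is named ProbeHandler in newer API versions.)
						ReadinessProbe: &corev1.Probe{
							Handler: corev1.Handler{
								HTTPGet: &corev1.HTTPGetAction{
									Path: "/index.html",
									Port: intstr.FromInt(80),
								},
							},
						},
					}},
				},
			},
		},
	}
	_, err := client.AppsV1().StatefulSets(ns).Create(ctx, ss, metav1.CreateOptions{})
	return err
}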
• [SLOW TEST:62.759 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":294,"completed":76,"skipped":1441,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:14:15.020: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name projected-secret-test-22688499-c9c8-4b8f-baa9-13db33021a5e STEP: Creating a pod to test consume secrets Jul 20 02:14:15.084: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-55fb0f40-3acf-486a-aa28-ea3061adfb4c" in namespace "projected-569" to be "Succeeded or Failed" Jul 20 02:14:15.102: INFO: Pod "pod-projected-secrets-55fb0f40-3acf-486a-aa28-ea3061adfb4c": Phase="Pending", Reason="", readiness=false. Elapsed: 17.959788ms Jul 20 02:14:17.209: INFO: Pod "pod-projected-secrets-55fb0f40-3acf-486a-aa28-ea3061adfb4c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.125188499s Jul 20 02:14:19.213: INFO: Pod "pod-projected-secrets-55fb0f40-3acf-486a-aa28-ea3061adfb4c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.129402462s Jul 20 02:14:21.449: INFO: Pod "pod-projected-secrets-55fb0f40-3acf-486a-aa28-ea3061adfb4c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.364892551s STEP: Saw pod success Jul 20 02:14:21.449: INFO: Pod "pod-projected-secrets-55fb0f40-3acf-486a-aa28-ea3061adfb4c" satisfied condition "Succeeded or Failed" Jul 20 02:14:21.478: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-55fb0f40-3acf-486a-aa28-ea3061adfb4c container secret-volume-test: STEP: delete the pod Jul 20 02:14:21.587: INFO: Waiting for pod pod-projected-secrets-55fb0f40-3acf-486a-aa28-ea3061adfb4c to disappear Jul 20 02:14:21.591: INFO: Pod pod-projected-secrets-55fb0f40-3acf-486a-aa28-ea3061adfb4c no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:14:21.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-569" for this suite. 
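The projected-secret spec whose teardown is logged just above mounts one Secret at two different paths in the same pod and has the test container read both copies. A sketch of the relevant pod construction, using illustrative names and a busybox command in place of the suite's agnhost image:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// multiVolumeSecretPod returns a pod that consumes the same secret through
// two projected volumes, mirroring the spec above. The container lists both
// mount points so success is observable in its logs.
func multiVolumeSecretPod(secretName string) *corev1.Pod {
	projected := corev1.VolumeSource{
		Projected: &corev1.ProjectedVolumeSource{
			Sources: []corev1.VolumeProjection{{
				Secret: &corev1.SecretProjection{
					LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
				},
			}},
		},
	}
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			// The same source backs two volumes, mounted at two paths.
			Volumes: []corev1.Volume{
				{Name: "secret-vol-1", VolumeSource: projected},
				{Name: "secret-vol-2", VolumeSource: projected},
			},
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox:1.29",
				Command: []string{"sh", "-c", "ls /etc/secret-1 /etc/secret-2"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "secret-vol-1", MountPath: "/etc/secret-1", ReadOnly: true},
					{Name: "secret-vol-2", MountPath: "/etc/secret-2", ReadOnly: true},
				},
			}},
		},
	}
}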
• [SLOW TEST:6.734 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":294,"completed":77,"skipped":1461,"failed":0} SSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:14:21.755: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-275 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating statefulset ss in namespace statefulset-275 Jul 20 02:14:21.922: INFO: Found 0 stateful pods, waiting for 1 Jul 20 02:14:31.926: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Jul 20 02:14:31.945: INFO: Deleting all statefulset in ns statefulset-275 Jul 20 02:14:31.951: INFO: Scaling statefulset ss to 0 Jul 20 02:14:52.053: INFO: Waiting for statefulset status.replicas updated to 0 Jul 20 02:14:52.056: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:14:52.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-275" for this suite. 
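The scale subresource exercised in the spec just above lets a client read and write only the replica count, without fetching or updating the full StatefulSet object; this is the same path kubectl scale and the horizontal pod autoscaler use. A sketch with illustrative names:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// scaleViaSubresource reads /scale for a StatefulSet and writes back a new
// replica count, the two steps logged as "getting scale subresource" and
// "updating a scale subresource" above.
func scaleViaSubresource(ctx context.Context, client kubernetes.Interface, ns, name string, replicas int32) error {
	scale, err := client.AppsV1().StatefulSets(ns).GetScale(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	fmt.Printf("current replicas: %d\n", scale.Spec.Replicas)

	scale.Spec.Replicas = replicas
	// UpdateScale PUTs only the Scale object; Spec.Replicas on the parent
	// StatefulSet changes as a side effect, which the spec then verifies.
	_, err = client.AppsV1().StatefulSets(ns).UpdateScale(ctx, name, scale, metav1.UpdateOptions{})
	return err
}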
• [SLOW TEST:30.408 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":294,"completed":78,"skipped":1464,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:14:52.163: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Jul 20 02:14:52.326: INFO: Pod name pod-release: Found 0 pods out of 1 Jul 20 02:14:57.347: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:14:57.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-8169" for this suite. 
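"Releasing" a pod, in the ReplicationController spec just above, means the controller stops counting a pod once its labels no longer match the controller's selector: it orphans the pod (dropping its controller ownerReference) and creates a replacement to restore the desired replica count. A sketch of the label flip that triggers this, with illustrative names (the suite's generated pod name is elided in the log):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// releasePod relabels a pod out of its ReplicationController's selector.
// After the update the RC orphans the pod and spins up a fresh replica,
// which is the behaviour the spec above asserts.
func releasePod(ctx context.Context, client kubernetes.Interface, ns, podName string) error {
	pod, err := client.CoreV1().Pods(ns).Get(ctx, podName, metav1.GetOptions{})
	if err != nil {
		return err
	}
	// Assuming the RC selects on name=pod-release, as the log above
	// suggests; any value that breaks the match works.
	pod.Labels["name"] = "not-pod-release"
	_, err = client.CoreV1().Pods(ns).Update(ctx, pod, metav1.UpdateOptions{})
	return err
}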
• [SLOW TEST:5.340 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":294,"completed":79,"skipped":1482,"failed":0} SSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:14:57.503: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-9992 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a new StatefulSet Jul 20 02:14:57.740: INFO: Found 0 stateful pods, waiting for 3 Jul 20 02:15:07.744: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jul 20 02:15:07.744: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jul 20 02:15:07.744: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jul 20 02:15:17.745: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jul 20 02:15:17.745: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jul 20 02:15:17.745: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Jul 20 02:15:17.772: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Jul 20 02:15:27.830: INFO: Updating stateful set ss2 Jul 20 02:15:27.887: INFO: Waiting for Pod statefulset-9992/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted Jul 20 02:15:38.427: INFO: Found 2 stateful pods, waiting for 3 Jul 20 02:15:48.440: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jul 20 02:15:48.440: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jul 20 02:15:48.440: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Jul 20 02:15:48.464: INFO: Updating 
stateful set ss2 Jul 20 02:15:48.511: INFO: Waiting for Pod statefulset-9992/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jul 20 02:15:58.549: INFO: Updating stateful set ss2 Jul 20 02:15:58.556: INFO: Waiting for StatefulSet statefulset-9992/ss2 to complete update Jul 20 02:15:58.556: INFO: Waiting for Pod statefulset-9992/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Jul 20 02:16:08.565: INFO: Deleting all statefulset in ns statefulset-9992 Jul 20 02:16:08.568: INFO: Scaling statefulset ss2 to 0 Jul 20 02:16:38.591: INFO: Waiting for statefulset status.replicas updated to 0 Jul 20 02:16:38.593: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:16:38.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9992" for this suite. • [SLOW TEST:101.136 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":294,"completed":80,"skipped":1488,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:16:38.640: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in container's args Jul 20 02:16:38.735: INFO: Waiting up to 5m0s for pod "var-expansion-59c68c8e-9ff4-418f-bcea-9340c402cb1f" in namespace "var-expansion-1362" to be "Succeeded or Failed" Jul 20 02:16:38.773: INFO: Pod "var-expansion-59c68c8e-9ff4-418f-bcea-9340c402cb1f": Phase="Pending", Reason="", readiness=false. Elapsed: 37.700301ms Jul 20 02:16:40.777: INFO: Pod "var-expansion-59c68c8e-9ff4-418f-bcea-9340c402cb1f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041315351s Jul 20 02:16:42.781: INFO: Pod "var-expansion-59c68c8e-9ff4-418f-bcea-9340c402cb1f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.045655508s STEP: Saw pod success Jul 20 02:16:42.781: INFO: Pod "var-expansion-59c68c8e-9ff4-418f-bcea-9340c402cb1f" satisfied condition "Succeeded or Failed" Jul 20 02:16:42.784: INFO: Trying to get logs from node latest-worker2 pod var-expansion-59c68c8e-9ff4-418f-bcea-9340c402cb1f container dapi-container: STEP: delete the pod Jul 20 02:16:42.961: INFO: Waiting for pod var-expansion-59c68c8e-9ff4-418f-bcea-9340c402cb1f to disappear Jul 20 02:16:42.989: INFO: Pod var-expansion-59c68c8e-9ff4-418f-bcea-9340c402cb1f no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:16:42.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1362" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":294,"completed":81,"skipped":1498,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:16:43.004: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:16:43.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-757" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance]","total":294,"completed":82,"skipped":1543,"failed":0} S ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:16:43.766: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:16:48.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2974" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":294,"completed":83,"skipped":1544,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:16:48.434: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Jul 20 02:16:48.659: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:16:56.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-849" for this suite. 
• [SLOW TEST:7.855 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":294,"completed":84,"skipped":1562,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:16:56.290: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:16:56.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-3675" for this suite. 
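The Table-transformation test that just ran exercises the API server's tabular rendering, which clients opt into purely through the Accept header; an aggregated backend that cannot attach table metadata must answer 406 Not Acceptable. A minimal sketch of such a request, assuming a kubernetes.Interface constructed as in the previous sketch; the function name is illustrative.

```go
package main

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/client-go/kubernetes"
)

// printTable asks the API server to render a pod list as a meta.k8s.io/v1
// Table, the same tabular form kubectl consumes.
func printTable(ctx context.Context, client kubernetes.Interface, ns string) error {
	data, err := client.CoreV1().RESTClient().
		Get().
		Namespace(ns).
		Resource("pods").
		SetHeader("Accept", "application/json;as=Table;v=v1;g=meta.k8s.io").
		DoRaw(ctx)
	if apierrors.IsNotAcceptable(err) {
		// The backend does not implement table metadata: the 406 the test asserts.
		return fmt.Errorf("backend cannot render tables: %w", err)
	}
	if err != nil {
		return err
	}
	fmt.Printf("table JSON: %s\n", data)
	return nil
}
```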
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":294,"completed":85,"skipped":1577,"failed":0} SSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:16:56.348: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name s-test-opt-del-5c43d7b4-8ca9-45ed-a13e-f159f02d8edb STEP: Creating secret with name s-test-opt-upd-84a8dc14-4ba0-4ed9-ba38-938497b9419a STEP: Creating the pod STEP: Deleting secret s-test-opt-del-5c43d7b4-8ca9-45ed-a13e-f159f02d8edb STEP: Updating secret s-test-opt-upd-84a8dc14-4ba0-4ed9-ba38-938497b9419a STEP: Creating secret with name s-test-opt-create-90afccbe-55c7-49da-9257-be4982f98a31 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:17:06.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2111" for this suite. • [SLOW TEST:10.232 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":294,"completed":86,"skipped":1581,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:17:06.581: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-6872a5d3-a640-42e1-b09c-140c18616748 STEP: Creating a pod to test consume configMaps Jul 20 02:17:07.069: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-21a125c4-ecf2-4aa5-b630-98dddaed51a6" in namespace "projected-1492" to be "Succeeded or Failed" Jul 20 02:17:07.217: INFO: Pod "pod-projected-configmaps-21a125c4-ecf2-4aa5-b630-98dddaed51a6": 
Phase="Pending", Reason="", readiness=false. Elapsed: 147.915952ms Jul 20 02:17:09.221: INFO: Pod "pod-projected-configmaps-21a125c4-ecf2-4aa5-b630-98dddaed51a6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.15169693s Jul 20 02:17:11.225: INFO: Pod "pod-projected-configmaps-21a125c4-ecf2-4aa5-b630-98dddaed51a6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.155908201s STEP: Saw pod success Jul 20 02:17:11.225: INFO: Pod "pod-projected-configmaps-21a125c4-ecf2-4aa5-b630-98dddaed51a6" satisfied condition "Succeeded or Failed" Jul 20 02:17:11.228: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-21a125c4-ecf2-4aa5-b630-98dddaed51a6 container projected-configmap-volume-test: STEP: delete the pod Jul 20 02:17:11.306: INFO: Waiting for pod pod-projected-configmaps-21a125c4-ecf2-4aa5-b630-98dddaed51a6 to disappear Jul 20 02:17:11.309: INFO: Pod pod-projected-configmaps-21a125c4-ecf2-4aa5-b630-98dddaed51a6 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:17:11.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1492" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":294,"completed":87,"skipped":1624,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:17:11.315: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:731 [It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-1011 STEP: creating service affinity-clusterip in namespace services-1011 STEP: creating replication controller affinity-clusterip in namespace services-1011 I0720 02:17:11.481406 8 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-1011, replica count: 3 I0720 02:17:14.532053 8 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0720 02:17:17.532305 8 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0720 02:17:20.532552 8 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jul 20 02:17:20.538: INFO: Creating new exec pod Jul 20 02:17:25.623: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-1011 execpod-affinityrrzjm -- /bin/sh -x -c nc -zv -t -w 2 
affinity-clusterip 80' Jul 20 02:17:25.894: INFO: stderr: "I0720 02:17:25.813055 1330 log.go:181] (0xc000f15340) (0xc000f0c3c0) Create stream\nI0720 02:17:25.813121 1330 log.go:181] (0xc000f15340) (0xc000f0c3c0) Stream added, broadcasting: 1\nI0720 02:17:25.817241 1330 log.go:181] (0xc000f15340) Reply frame received for 1\nI0720 02:17:25.817274 1330 log.go:181] (0xc000f15340) (0xc0007d10e0) Create stream\nI0720 02:17:25.817282 1330 log.go:181] (0xc000f15340) (0xc0007d10e0) Stream added, broadcasting: 3\nI0720 02:17:25.818220 1330 log.go:181] (0xc000f15340) Reply frame received for 3\nI0720 02:17:25.818256 1330 log.go:181] (0xc000f15340) (0xc00049e460) Create stream\nI0720 02:17:25.818264 1330 log.go:181] (0xc000f15340) (0xc00049e460) Stream added, broadcasting: 5\nI0720 02:17:25.818973 1330 log.go:181] (0xc000f15340) Reply frame received for 5\nI0720 02:17:25.884650 1330 log.go:181] (0xc000f15340) Data frame received for 5\nI0720 02:17:25.884864 1330 log.go:181] (0xc00049e460) (5) Data frame handling\nI0720 02:17:25.884901 1330 log.go:181] (0xc00049e460) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip 80\nI0720 02:17:25.885218 1330 log.go:181] (0xc000f15340) Data frame received for 5\nI0720 02:17:25.885253 1330 log.go:181] (0xc00049e460) (5) Data frame handling\nI0720 02:17:25.885288 1330 log.go:181] (0xc00049e460) (5) Data frame sent\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\nI0720 02:17:25.885582 1330 log.go:181] (0xc000f15340) Data frame received for 3\nI0720 02:17:25.885619 1330 log.go:181] (0xc0007d10e0) (3) Data frame handling\nI0720 02:17:25.885664 1330 log.go:181] (0xc000f15340) Data frame received for 5\nI0720 02:17:25.885707 1330 log.go:181] (0xc00049e460) (5) Data frame handling\nI0720 02:17:25.887894 1330 log.go:181] (0xc000f15340) Data frame received for 1\nI0720 02:17:25.887935 1330 log.go:181] (0xc000f0c3c0) (1) Data frame handling\nI0720 02:17:25.887954 1330 log.go:181] (0xc000f0c3c0) (1) Data frame sent\nI0720 02:17:25.887982 1330 log.go:181] (0xc000f15340) (0xc000f0c3c0) Stream removed, broadcasting: 1\nI0720 02:17:25.888011 1330 log.go:181] (0xc000f15340) Go away received\nI0720 02:17:25.888475 1330 log.go:181] (0xc000f15340) (0xc000f0c3c0) Stream removed, broadcasting: 1\nI0720 02:17:25.888499 1330 log.go:181] (0xc000f15340) (0xc0007d10e0) Stream removed, broadcasting: 3\nI0720 02:17:25.888508 1330 log.go:181] (0xc000f15340) (0xc00049e460) Stream removed, broadcasting: 5\n" Jul 20 02:17:25.894: INFO: stdout: "" Jul 20 02:17:25.895: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-1011 execpod-affinityrrzjm -- /bin/sh -x -c nc -zv -t -w 2 10.100.164.87 80' Jul 20 02:17:26.122: INFO: stderr: "I0720 02:17:26.023130 1348 log.go:181] (0xc0009a7a20) (0xc000cc7720) Create stream\nI0720 02:17:26.023178 1348 log.go:181] (0xc0009a7a20) (0xc000cc7720) Stream added, broadcasting: 1\nI0720 02:17:26.027807 1348 log.go:181] (0xc0009a7a20) Reply frame received for 1\nI0720 02:17:26.027851 1348 log.go:181] (0xc0009a7a20) (0xc0005a8aa0) Create stream\nI0720 02:17:26.027865 1348 log.go:181] (0xc0009a7a20) (0xc0005a8aa0) Stream added, broadcasting: 3\nI0720 02:17:26.029064 1348 log.go:181] (0xc0009a7a20) Reply frame received for 3\nI0720 02:17:26.029094 1348 log.go:181] (0xc0009a7a20) (0xc0005a9d60) Create stream\nI0720 02:17:26.029105 1348 log.go:181] (0xc0009a7a20) (0xc0005a9d60) Stream added, broadcasting: 5\nI0720 02:17:26.030073 1348 log.go:181] (0xc0009a7a20) Reply frame 
received for 5\nI0720 02:17:26.114724 1348 log.go:181] (0xc0009a7a20) Data frame received for 3\nI0720 02:17:26.114763 1348 log.go:181] (0xc0005a8aa0) (3) Data frame handling\nI0720 02:17:26.114787 1348 log.go:181] (0xc0009a7a20) Data frame received for 5\nI0720 02:17:26.114796 1348 log.go:181] (0xc0005a9d60) (5) Data frame handling\nI0720 02:17:26.114807 1348 log.go:181] (0xc0005a9d60) (5) Data frame sent\nI0720 02:17:26.114826 1348 log.go:181] (0xc0009a7a20) Data frame received for 5\nI0720 02:17:26.114836 1348 log.go:181] (0xc0005a9d60) (5) Data frame handling\n+ nc -zv -t -w 2 10.100.164.87 80\nConnection to 10.100.164.87 80 port [tcp/http] succeeded!\nI0720 02:17:26.116086 1348 log.go:181] (0xc0009a7a20) Data frame received for 1\nI0720 02:17:26.116109 1348 log.go:181] (0xc000cc7720) (1) Data frame handling\nI0720 02:17:26.116123 1348 log.go:181] (0xc000cc7720) (1) Data frame sent\nI0720 02:17:26.116139 1348 log.go:181] (0xc0009a7a20) (0xc000cc7720) Stream removed, broadcasting: 1\nI0720 02:17:26.116406 1348 log.go:181] (0xc0009a7a20) Go away received\nI0720 02:17:26.116495 1348 log.go:181] (0xc0009a7a20) (0xc000cc7720) Stream removed, broadcasting: 1\nI0720 02:17:26.116516 1348 log.go:181] (0xc0009a7a20) (0xc0005a8aa0) Stream removed, broadcasting: 3\nI0720 02:17:26.116525 1348 log.go:181] (0xc0009a7a20) (0xc0005a9d60) Stream removed, broadcasting: 5\n" Jul 20 02:17:26.122: INFO: stdout: "" Jul 20 02:17:26.122: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-1011 execpod-affinityrrzjm -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.100.164.87:80/ ; done' Jul 20 02:17:26.440: INFO: stderr: "I0720 02:17:26.280268 1366 log.go:181] (0xc00063abb0) (0xc000b30e60) Create stream\nI0720 02:17:26.280364 1366 log.go:181] (0xc00063abb0) (0xc000b30e60) Stream added, broadcasting: 1\nI0720 02:17:26.282629 1366 log.go:181] (0xc00063abb0) Reply frame received for 1\nI0720 02:17:26.282684 1366 log.go:181] (0xc00063abb0) (0xc000b26be0) Create stream\nI0720 02:17:26.282707 1366 log.go:181] (0xc00063abb0) (0xc000b26be0) Stream added, broadcasting: 3\nI0720 02:17:26.283753 1366 log.go:181] (0xc00063abb0) Reply frame received for 3\nI0720 02:17:26.283821 1366 log.go:181] (0xc00063abb0) (0xc000b1e3c0) Create stream\nI0720 02:17:26.283853 1366 log.go:181] (0xc00063abb0) (0xc000b1e3c0) Stream added, broadcasting: 5\nI0720 02:17:26.284850 1366 log.go:181] (0xc00063abb0) Reply frame received for 5\nI0720 02:17:26.336423 1366 log.go:181] (0xc00063abb0) Data frame received for 5\nI0720 02:17:26.336445 1366 log.go:181] (0xc000b1e3c0) (5) Data frame handling\nI0720 02:17:26.336453 1366 log.go:181] (0xc000b1e3c0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.164.87:80/\nI0720 02:17:26.336484 1366 log.go:181] (0xc00063abb0) Data frame received for 3\nI0720 02:17:26.336510 1366 log.go:181] (0xc000b26be0) (3) Data frame handling\nI0720 02:17:26.336542 1366 log.go:181] (0xc000b26be0) (3) Data frame sent\nI0720 02:17:26.344243 1366 log.go:181] (0xc00063abb0) Data frame received for 3\nI0720 02:17:26.344264 1366 log.go:181] (0xc000b26be0) (3) Data frame handling\nI0720 02:17:26.344282 1366 log.go:181] (0xc000b26be0) (3) Data frame sent\nI0720 02:17:26.344982 1366 log.go:181] (0xc00063abb0) Data frame received for 3\nI0720 02:17:26.345060 1366 log.go:181] (0xc000b26be0) (3) Data frame handling\nI0720 02:17:26.345078 1366 log.go:181] (0xc000b26be0) (3) Data 
frame sent\nI0720 02:17:26.345101 1366 log.go:181] (0xc00063abb0) Data frame received for 5\nI0720 02:17:26.345108 1366 log.go:181] (0xc000b1e3c0) (5) Data frame handling\nI0720 02:17:26.345115 1366 log.go:181] (0xc000b1e3c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.164.87:80/\nI0720 02:17:26.351632 1366 log.go:181] (0xc00063abb0) Data frame received for 3\nI0720 02:17:26.351651 1366 log.go:181] (0xc000b26be0) (3) Data frame handling\nI0720 02:17:26.351676 1366 log.go:181] (0xc000b26be0) (3) Data frame sent\nI0720 02:17:26.352444 1366 log.go:181] (0xc00063abb0) Data frame received for 5\nI0720 02:17:26.352483 1366 log.go:181] (0xc000b1e3c0) (5) Data frame handling\nI0720 02:17:26.352499 1366 log.go:181] (0xc000b1e3c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.164.87:80/\nI0720 02:17:26.352516 1366 log.go:181] (0xc00063abb0) Data frame received for 3\nI0720 02:17:26.352538 1366 log.go:181] (0xc000b26be0) (3) Data frame handling\nI0720 02:17:26.352560 1366 log.go:181] (0xc000b26be0) (3) Data frame sent\nI0720 02:17:26.356448 1366 log.go:181] (0xc00063abb0) Data frame received for 3\nI0720 02:17:26.356461 1366 log.go:181] (0xc000b26be0) (3) Data frame handling\nI0720 02:17:26.356468 1366 log.go:181] (0xc000b26be0) (3) Data frame sent\nI0720 02:17:26.356861 1366 log.go:181] (0xc00063abb0) Data frame received for 5\nI0720 02:17:26.356880 1366 log.go:181] (0xc000b1e3c0) (5) Data frame handling\nI0720 02:17:26.356887 1366 log.go:181] (0xc000b1e3c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.164.87:80/\nI0720 02:17:26.357121 1366 log.go:181] (0xc00063abb0) Data frame received for 3\nI0720 02:17:26.357153 1366 log.go:181] (0xc000b26be0) (3) Data frame handling\nI0720 02:17:26.357181 1366 log.go:181] (0xc000b26be0) (3) Data frame sent\nI0720 02:17:26.361533 1366 log.go:181] (0xc00063abb0) Data frame received for 3\nI0720 02:17:26.361557 1366 log.go:181] (0xc000b26be0) (3) Data frame handling\nI0720 02:17:26.361574 1366 log.go:181] (0xc000b26be0) (3) Data frame sent\nI0720 02:17:26.362098 1366 log.go:181] (0xc00063abb0) Data frame received for 5\nI0720 02:17:26.362114 1366 log.go:181] (0xc000b1e3c0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.164.87:80/\nI0720 02:17:26.362127 1366 log.go:181] (0xc00063abb0) Data frame received for 3\nI0720 02:17:26.362146 1366 log.go:181] (0xc000b26be0) (3) Data frame handling\nI0720 02:17:26.362156 1366 log.go:181] (0xc000b26be0) (3) Data frame sent\nI0720 02:17:26.362170 1366 log.go:181] (0xc000b1e3c0) (5) Data frame sent\nI0720 02:17:26.367651 1366 log.go:181] (0xc00063abb0) Data frame received for 3\nI0720 02:17:26.367669 1366 log.go:181] (0xc000b26be0) (3) Data frame handling\nI0720 02:17:26.367680 1366 log.go:181] (0xc000b26be0) (3) Data frame sent\nI0720 02:17:26.368229 1366 log.go:181] (0xc00063abb0) Data frame received for 5\nI0720 02:17:26.368248 1366 log.go:181] (0xc000b1e3c0) (5) Data frame handling\nI0720 02:17:26.368262 1366 log.go:181] (0xc000b1e3c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.164.87:80/\nI0720 02:17:26.368443 1366 log.go:181] (0xc00063abb0) Data frame received for 3\nI0720 02:17:26.368552 1366 log.go:181] (0xc000b26be0) (3) Data frame handling\nI0720 02:17:26.368597 1366 log.go:181] (0xc000b26be0) (3) Data frame sent\nI0720 02:17:26.375071 1366 log.go:181] (0xc00063abb0) Data frame received for 3\nI0720 02:17:26.375111 1366 log.go:181] (0xc000b26be0) (3) Data frame 
handling\nI0720 02:17:26.375143 1366 log.go:181] (0xc000b26be0) (3) Data frame sent\nI0720 02:17:26.375434 1366 log.go:181] (0xc00063abb0) Data frame received for 5\nI0720 02:17:26.375468 1366 log.go:181] (0xc000b1e3c0) (5) Data frame handling\nI0720 02:17:26.375485 1366 log.go:181] (0xc000b1e3c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.164.87:80/\nI0720 02:17:26.375510 1366 log.go:181] (0xc00063abb0) Data frame received for 3\nI0720 02:17:26.375520 1366 log.go:181] (0xc000b26be0) (3) Data frame handling\nI0720 02:17:26.375532 1366 log.go:181] (0xc000b26be0) (3) Data frame sent\nI0720 02:17:26.381865 1366 log.go:181] (0xc00063abb0) Data frame received for 3\nI0720 02:17:26.381896 1366 log.go:181] (0xc000b26be0) (3) Data frame handling\nI0720 02:17:26.381921 1366 log.go:181] (0xc000b26be0) (3) Data frame sent\nI0720 02:17:26.382868 1366 log.go:181] (0xc00063abb0) Data frame received for 5\nI0720 02:17:26.382905 1366 log.go:181] (0xc000b1e3c0) (5) Data frame handling\nI0720 02:17:26.382918 1366 log.go:181] (0xc000b1e3c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.164.87:80/\nI0720 02:17:26.382949 1366 log.go:181] (0xc00063abb0) Data frame received for 3\nI0720 02:17:26.382980 1366 log.go:181] (0xc000b26be0) (3) Data frame handling\nI0720 02:17:26.383000 1366 log.go:181] (0xc000b26be0) (3) Data frame sent\nI0720 02:17:26.389512 1366 log.go:181] (0xc00063abb0) Data frame received for 3\nI0720 02:17:26.389530 1366 log.go:181] (0xc000b26be0) (3) Data frame handling\nI0720 02:17:26.389546 1366 log.go:181] (0xc000b26be0) (3) Data frame sent\nI0720 02:17:26.390952 1366 log.go:181] (0xc00063abb0) Data frame received for 3\nI0720 02:17:26.390990 1366 log.go:181] (0xc000b26be0) (3) Data frame handling\nI0720 02:17:26.391005 1366 log.go:181] (0xc000b26be0) (3) Data frame sent\nI0720 02:17:26.391024 1366 log.go:181] (0xc00063abb0) Data frame received for 5\nI0720 02:17:26.391035 1366 log.go:181] (0xc000b1e3c0) (5) Data frame handling\nI0720 02:17:26.391048 1366 log.go:181] (0xc000b1e3c0) (5) Data frame sent\nI0720 02:17:26.391059 1366 log.go:181] (0xc00063abb0) Data frame received for 5\nI0720 02:17:26.391069 1366 log.go:181] (0xc000b1e3c0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.164.87:80/\nI0720 02:17:26.391092 1366 log.go:181] (0xc000b1e3c0) (5) Data frame sent\nI0720 02:17:26.395394 1366 log.go:181] (0xc00063abb0) Data frame received for 3\nI0720 02:17:26.395422 1366 log.go:181] (0xc000b26be0) (3) Data frame handling\nI0720 02:17:26.395435 1366 log.go:181] (0xc000b26be0) (3) Data frame sent\nI0720 02:17:26.396157 1366 log.go:181] (0xc00063abb0) Data frame received for 3\nI0720 02:17:26.396182 1366 log.go:181] (0xc000b26be0) (3) Data frame handling\nI0720 02:17:26.396192 1366 log.go:181] (0xc000b26be0) (3) Data frame sent\nI0720 02:17:26.396204 1366 log.go:181] (0xc00063abb0) Data frame received for 5\nI0720 02:17:26.396214 1366 log.go:181] (0xc000b1e3c0) (5) Data frame handling\nI0720 02:17:26.396224 1366 log.go:181] (0xc000b1e3c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.164.87:80/\nI0720 02:17:26.403416 1366 log.go:181] (0xc00063abb0) Data frame received for 3\nI0720 02:17:26.403442 1366 log.go:181] (0xc000b26be0) (3) Data frame handling\nI0720 02:17:26.403463 1366 log.go:181] (0xc000b26be0) (3) Data frame sent\nI0720 02:17:26.403899 1366 log.go:181] (0xc00063abb0) Data frame received for 5\nI0720 02:17:26.403913 1366 log.go:181] (0xc000b1e3c0) (5) Data 
frame handling\nI0720 02:17:26.403921 1366 log.go:181] (0xc000b1e3c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.164.87:80/\nI0720 02:17:26.403928 1366 log.go:181] (0xc00063abb0) Data frame received for 3\nI0720 02:17:26.403953 1366 log.go:181] (0xc000b26be0) (3) Data frame handling\nI0720 02:17:26.403970 1366 log.go:181] (0xc000b26be0) (3) Data frame sent\nI0720 02:17:26.409326 1366 log.go:181] (0xc00063abb0) Data frame received for 3\nI0720 02:17:26.409348 1366 log.go:181] (0xc000b26be0) (3) Data frame handling\nI0720 02:17:26.409364 1366 log.go:181] (0xc000b26be0) (3) Data frame sent\nI0720 02:17:26.409760 1366 log.go:181] (0xc00063abb0) Data frame received for 5\nI0720 02:17:26.409777 1366 log.go:181] (0xc000b1e3c0) (5) Data frame handling\nI0720 02:17:26.409787 1366 log.go:181] (0xc000b1e3c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.164.87:80/\nI0720 02:17:26.409847 1366 log.go:181] (0xc00063abb0) Data frame received for 3\nI0720 02:17:26.409870 1366 log.go:181] (0xc000b26be0) (3) Data frame handling\nI0720 02:17:26.409887 1366 log.go:181] (0xc000b26be0) (3) Data frame sent\nI0720 02:17:26.415463 1366 log.go:181] (0xc00063abb0) Data frame received for 3\nI0720 02:17:26.415487 1366 log.go:181] (0xc000b26be0) (3) Data frame handling\nI0720 02:17:26.415509 1366 log.go:181] (0xc000b26be0) (3) Data frame sent\nI0720 02:17:26.415964 1366 log.go:181] (0xc00063abb0) Data frame received for 3\nI0720 02:17:26.415982 1366 log.go:181] (0xc00063abb0) Data frame received for 5\nI0720 02:17:26.416000 1366 log.go:181] (0xc000b1e3c0) (5) Data frame handling\nI0720 02:17:26.416010 1366 log.go:181] (0xc000b1e3c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.164.87:80/\nI0720 02:17:26.416025 1366 log.go:181] (0xc000b26be0) (3) Data frame handling\nI0720 02:17:26.416047 1366 log.go:181] (0xc000b26be0) (3) Data frame sent\nI0720 02:17:26.419168 1366 log.go:181] (0xc00063abb0) Data frame received for 3\nI0720 02:17:26.419184 1366 log.go:181] (0xc000b26be0) (3) Data frame handling\nI0720 02:17:26.419192 1366 log.go:181] (0xc000b26be0) (3) Data frame sent\nI0720 02:17:26.419892 1366 log.go:181] (0xc00063abb0) Data frame received for 3\nI0720 02:17:26.419912 1366 log.go:181] (0xc000b26be0) (3) Data frame handling\nI0720 02:17:26.419922 1366 log.go:181] (0xc000b26be0) (3) Data frame sent\nI0720 02:17:26.419938 1366 log.go:181] (0xc00063abb0) Data frame received for 5\nI0720 02:17:26.419947 1366 log.go:181] (0xc000b1e3c0) (5) Data frame handling\nI0720 02:17:26.419955 1366 log.go:181] (0xc000b1e3c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.164.87:80/\nI0720 02:17:26.426734 1366 log.go:181] (0xc00063abb0) Data frame received for 5\nI0720 02:17:26.426755 1366 log.go:181] (0xc000b1e3c0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.164.87:80/\nI0720 02:17:26.426781 1366 log.go:181] (0xc00063abb0) Data frame received for 3\nI0720 02:17:26.426817 1366 log.go:181] (0xc000b26be0) (3) Data frame handling\nI0720 02:17:26.426840 1366 log.go:181] (0xc000b1e3c0) (5) Data frame sent\nI0720 02:17:26.426869 1366 log.go:181] (0xc000b26be0) (3) Data frame sent\nI0720 02:17:26.428423 1366 log.go:181] (0xc00063abb0) Data frame received for 3\nI0720 02:17:26.428455 1366 log.go:181] (0xc000b26be0) (3) Data frame handling\nI0720 02:17:26.428468 1366 log.go:181] (0xc000b26be0) (3) Data frame sent\nI0720 02:17:26.429000 1366 log.go:181] (0xc00063abb0) Data frame received 
for 3\nI0720 02:17:26.429022 1366 log.go:181] (0xc000b26be0) (3) Data frame handling\nI0720 02:17:26.429030 1366 log.go:181] (0xc000b26be0) (3) Data frame sent\nI0720 02:17:26.429045 1366 log.go:181] (0xc00063abb0) Data frame received for 5\nI0720 02:17:26.429063 1366 log.go:181] (0xc000b1e3c0) (5) Data frame handling\nI0720 02:17:26.429088 1366 log.go:181] (0xc000b1e3c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.164.87:80/\nI0720 02:17:26.433524 1366 log.go:181] (0xc00063abb0) Data frame received for 3\nI0720 02:17:26.433539 1366 log.go:181] (0xc000b26be0) (3) Data frame handling\nI0720 02:17:26.433546 1366 log.go:181] (0xc000b26be0) (3) Data frame sent\nI0720 02:17:26.434017 1366 log.go:181] (0xc00063abb0) Data frame received for 3\nI0720 02:17:26.434035 1366 log.go:181] (0xc000b26be0) (3) Data frame handling\nI0720 02:17:26.434205 1366 log.go:181] (0xc00063abb0) Data frame received for 5\nI0720 02:17:26.434218 1366 log.go:181] (0xc000b1e3c0) (5) Data frame handling\nI0720 02:17:26.436054 1366 log.go:181] (0xc00063abb0) Data frame received for 1\nI0720 02:17:26.436073 1366 log.go:181] (0xc000b30e60) (1) Data frame handling\nI0720 02:17:26.436081 1366 log.go:181] (0xc000b30e60) (1) Data frame sent\nI0720 02:17:26.436090 1366 log.go:181] (0xc00063abb0) (0xc000b30e60) Stream removed, broadcasting: 1\nI0720 02:17:26.436100 1366 log.go:181] (0xc00063abb0) Go away received\nI0720 02:17:26.436524 1366 log.go:181] (0xc00063abb0) (0xc000b30e60) Stream removed, broadcasting: 1\nI0720 02:17:26.436546 1366 log.go:181] (0xc00063abb0) (0xc000b26be0) Stream removed, broadcasting: 3\nI0720 02:17:26.436554 1366 log.go:181] (0xc00063abb0) (0xc000b1e3c0) Stream removed, broadcasting: 5\n" Jul 20 02:17:26.441: INFO: stdout: "\naffinity-clusterip-fvv6j\naffinity-clusterip-fvv6j\naffinity-clusterip-fvv6j\naffinity-clusterip-fvv6j\naffinity-clusterip-fvv6j\naffinity-clusterip-fvv6j\naffinity-clusterip-fvv6j\naffinity-clusterip-fvv6j\naffinity-clusterip-fvv6j\naffinity-clusterip-fvv6j\naffinity-clusterip-fvv6j\naffinity-clusterip-fvv6j\naffinity-clusterip-fvv6j\naffinity-clusterip-fvv6j\naffinity-clusterip-fvv6j\naffinity-clusterip-fvv6j" Jul 20 02:17:26.441: INFO: Received response from host: affinity-clusterip-fvv6j Jul 20 02:17:26.441: INFO: Received response from host: affinity-clusterip-fvv6j Jul 20 02:17:26.441: INFO: Received response from host: affinity-clusterip-fvv6j Jul 20 02:17:26.441: INFO: Received response from host: affinity-clusterip-fvv6j Jul 20 02:17:26.441: INFO: Received response from host: affinity-clusterip-fvv6j Jul 20 02:17:26.441: INFO: Received response from host: affinity-clusterip-fvv6j Jul 20 02:17:26.441: INFO: Received response from host: affinity-clusterip-fvv6j Jul 20 02:17:26.441: INFO: Received response from host: affinity-clusterip-fvv6j Jul 20 02:17:26.441: INFO: Received response from host: affinity-clusterip-fvv6j Jul 20 02:17:26.441: INFO: Received response from host: affinity-clusterip-fvv6j Jul 20 02:17:26.441: INFO: Received response from host: affinity-clusterip-fvv6j Jul 20 02:17:26.441: INFO: Received response from host: affinity-clusterip-fvv6j Jul 20 02:17:26.441: INFO: Received response from host: affinity-clusterip-fvv6j Jul 20 02:17:26.441: INFO: Received response from host: affinity-clusterip-fvv6j Jul 20 02:17:26.441: INFO: Received response from host: affinity-clusterip-fvv6j Jul 20 02:17:26.441: INFO: Received response from host: affinity-clusterip-fvv6j Jul 20 02:17:26.441: INFO: Cleaning up the exec pod STEP: deleting 
ReplicationController affinity-clusterip in namespace services-1011, will wait for the garbage collector to delete the pods Jul 20 02:17:26.619: INFO: Deleting ReplicationController affinity-clusterip took: 24.311106ms Jul 20 02:17:27.119: INFO: Terminating ReplicationController affinity-clusterip pods took: 500.232566ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:17:43.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1011" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:735 • [SLOW TEST:32.610 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":294,"completed":88,"skipped":1635,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:17:43.926: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 20 02:17:44.754: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jul 20 02:17:47.075: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730808264, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730808264, loc:(*time.Location)(0x7deddc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730808264, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730808264, loc:(*time.Location)(0x7deddc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 20 02:17:49.114: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730808264, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730808264, loc:(*time.Location)(0x7deddc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730808264, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730808264, loc:(*time.Location)(0x7deddc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 20 02:17:52.109: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:17:52.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2075" for this suite. STEP: Destroying namespace "webhook-2075-markers" for this suite. 
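The dummy-configuration steps above check a deadlock guard: admission webhooks are not called on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects themselves, so a registered webhook can always be corrected or removed. A sketch of the create-then-delete round trip, assuming a kubernetes.Interface and a PEM CA bundle; the configuration and service names are illustrative (the service name echoes the e2e-test-webhook service from the log).

```go
package main

import (
	"context"

	admissionv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createAndRemoveDummyWebhook mirrors the dummy-object step above: register a
// ValidatingWebhookConfiguration, then verify it can still be deleted even
// while other webhooks claim to intercept such objects.
func createAndRemoveDummyWebhook(ctx context.Context, client kubernetes.Interface, caBundle []byte) error {
	fail := admissionv1.Fail
	sideEffects := admissionv1.SideEffectClassNone
	cfg := &admissionv1.ValidatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "dummy-validating-webhook"}, // illustrative name
		Webhooks: []admissionv1.ValidatingWebhook{{
			Name: "dummy.example.com",
			ClientConfig: admissionv1.WebhookClientConfig{
				Service: &admissionv1.ServiceReference{
					Namespace: "default",
					Name:      "e2e-test-webhook",
				},
				CABundle: caBundle,
			},
			// No rules: this webhook matches nothing, which is enough for
			// exercising create and delete of the configuration object itself.
			FailurePolicy:           &fail,
			SideEffects:             &sideEffects,
			AdmissionReviewVersions: []string{"v1"},
		}},
	}
	created, err := client.AdmissionregistrationV1().
		ValidatingWebhookConfigurations().Create(ctx, cfg, metav1.CreateOptions{})
	if err != nil {
		return err
	}
	// Deletion must succeed regardless of what other webhooks are registered.
	return client.AdmissionregistrationV1().
		ValidatingWebhookConfigurations().Delete(ctx, created.Name, metav1.DeleteOptions{})
}
```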
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.453 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":294,"completed":89,"skipped":1701,"failed":0} SSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:17:52.379: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod Jul 20 02:17:57.022: INFO: Successfully updated pod "annotationupdate415d8bdc-8af3-4497-a1c0-b0ab2e7439df" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:18:01.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9971" for this suite. 
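The annotationupdate pod above relies on the downward API volume being a projection the kubelet keeps refreshing after pod creation: when the test updates the pod's annotations, the projected file is rewritten and the container sees the change. A sketch of such a pod spec, assuming the agnhost image used elsewhere in this run; the name, annotation, and mount path are illustrative.

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// annotationPod builds a pod with a downward API volume projecting
// metadata.annotations into a file; updating the annotations later causes
// the kubelet to rewrite the file, which is what the test polls for.
func annotationPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:        "annotationupdate-demo", // illustrative
			Annotations: map[string]string{"builder": "bar"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "client-container",
				Image: "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20",
				// Any image with a shell works; this just keeps printing the file.
				Command: []string{"sh", "-c",
					"while true; do cat /etc/podinfo/annotations; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "annotations",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
						}},
					},
				},
			}},
		},
	}
}
```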
• [SLOW TEST:8.687 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":294,"completed":90,"skipped":1706,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:18:01.067: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 20 02:18:01.234: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"726e4a81-889c-4da5-9e10-003e6304d383", Controller:(*bool)(0xc003d10afa), BlockOwnerDeletion:(*bool)(0xc003d10afb)}} Jul 20 02:18:01.284: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"1f54f940-7b84-47fd-9aea-e95831b5f453", Controller:(*bool)(0xc003d1db26), BlockOwnerDeletion:(*bool)(0xc003d1db27)}} Jul 20 02:18:01.301: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"d6df2b85-855d-4caa-a77e-a54f5d6ee291", Controller:(*bool)(0xc0027b4b76), BlockOwnerDeletion:(*bool)(0xc0027b4b77)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:18:06.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1662" for this suite. 
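The three OwnerReferences dumps above deliberately form a cycle (pod1 owned by pod3, pod2 by pod1, pod3 by pod2), which the garbage collector must detect rather than stall on BlockOwnerDeletion. A sketch of wiring one edge of such a cycle; the dump shows only pointer addresses, so the true/true values below are an assumption.

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
)

// ownedBy sets the kind of ownerReference the log dumps above, e.g.
// ownedBy(pod1, "pod3", pod3.ObjectMeta.UID). Chaining three calls in a
// circle produces the dependency cycle the test expects the garbage
// collector to tolerate.
func ownedBy(pod *corev1.Pod, ownerName string, ownerUID types.UID) {
	ctrl, block := true, true // assumed values; the dump prints only pointers
	pod.ObjectMeta.OwnerReferences = []metav1.OwnerReference{{
		APIVersion:         "v1",
		Kind:               "Pod",
		Name:               ownerName,
		UID:                ownerUID,
		Controller:         &ctrl,
		BlockOwnerDeletion: &block,
	}}
}
```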
• [SLOW TEST:5.343 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":294,"completed":91,"skipped":1736,"failed":0} SSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:18:06.410: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 20 02:18:06.540: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Jul 20 02:18:11.583: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jul 20 02:18:11.583: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 Jul 20 02:18:11.641: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-1851 /apis/apps/v1/namespaces/deployment-1851/deployments/test-cleanup-deployment 722abca0-6649-4990-a7dc-8669531e85b2 93144 1 2020-07-20 02:18:11 +0000 UTC map[name:cleanup-pod] map[] [] [] [{e2e.test Update apps/v1 2020-07-20 02:18:11 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 
0xc002870078 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Jul 20 02:18:11.660: INFO: New ReplicaSet "test-cleanup-deployment-bccdddf9b" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-bccdddf9b deployment-1851 /apis/apps/v1/namespaces/deployment-1851/replicasets/test-cleanup-deployment-bccdddf9b d4d79c62-b2fa-4422-b7b8-01a2e0c6c13d 93147 1 2020-07-20 02:18:11 +0000 UTC map[name:cleanup-pod pod-template-hash:bccdddf9b] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 722abca0-6649-4990-a7dc-8669531e85b2 0xc003d8aba0 0xc003d8aba1}] [] [{kube-controller-manager Update apps/v1 2020-07-20 02:18:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"722abca0-6649-4990-a7dc-8669531e85b2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: bccdddf9b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:bccdddf9b] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003d8ac18 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jul 20 02:18:11.660: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Jul 20 02:18:11.660: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-1851 /apis/apps/v1/namespaces/deployment-1851/replicasets/test-cleanup-controller 8fd40000-d807-4640-a5d1-0241f6380812 93146 1 2020-07-20 02:18:06 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 722abca0-6649-4990-a7dc-8669531e85b2 0xc003d8aa97 0xc003d8aa98}] [] [{e2e.test Update apps/v1 2020-07-20 02:18:06 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-07-20 02:18:11 +0000 UTC FieldsV1 {"f:metadata":{"f:ownerReferences":{".":{},"k:{\"uid\":\"722abca0-6649-4990-a7dc-8669531e85b2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc003d8ab38 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jul 20 02:18:11.710: INFO: Pod "test-cleanup-controller-8cdlk" is available: &Pod{ObjectMeta:{test-cleanup-controller-8cdlk test-cleanup-controller- deployment-1851 /api/v1/namespaces/deployment-1851/pods/test-cleanup-controller-8cdlk e95b0140-98dc-45b1-9004-328048202426 93134 0 2020-07-20 02:18:06 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 8fd40000-d807-4640-a5d1-0241f6380812 0xc002870437 0xc002870438}] [] [{kube-controller-manager Update v1 2020-07-20 02:18:06 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8fd40000-d807-4640-a5d1-0241f6380812\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-07-20 02:18:10 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.170\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8rdfn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8rdfn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8rdfn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLi
nks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:18:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:18:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:18:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:18:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:10.244.2.170,StartTime:2020-07-20 02:18:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-20 02:18:09 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://53e388c74c74ee13a93ceddc3ccb236a2e4645c5161995389714c65103d9ab0f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.170,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 20 02:18:11.711: INFO: Pod "test-cleanup-deployment-bccdddf9b-xrbgs" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-bccdddf9b-xrbgs test-cleanup-deployment-bccdddf9b- deployment-1851 /api/v1/namespaces/deployment-1851/pods/test-cleanup-deployment-bccdddf9b-xrbgs 1d08010d-7d63-4335-ad6c-e30b117851d3 93154 0 2020-07-20 02:18:11 +0000 UTC map[name:cleanup-pod pod-template-hash:bccdddf9b] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-bccdddf9b d4d79c62-b2fa-4422-b7b8-01a2e0c6c13d 0xc002870600 0xc002870601}] [] [{kube-controller-manager Update v1 2020-07-20 02:18:11 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4d79c62-b2fa-4422-b7b8-01a2e0c6c13d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8rdfn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8rdfn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8rdfn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,Las
tProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:18:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:18:11.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1851" for this suite. • [SLOW TEST:5.407 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":294,"completed":92,"skipped":1745,"failed":0} SS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:18:11.818: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jul 20 02:18:11.909: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9506af54-4b5b-4e97-96ec-8f997b0f39a3" in namespace "downward-api-8715" to be "Succeeded or Failed" Jul 20 02:18:11.934: INFO: Pod "downwardapi-volume-9506af54-4b5b-4e97-96ec-8f997b0f39a3": Phase="Pending", Reason="", readiness=false. Elapsed: 25.088948ms Jul 20 02:18:14.021: INFO: Pod "downwardapi-volume-9506af54-4b5b-4e97-96ec-8f997b0f39a3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.111553017s Jul 20 02:18:16.025: INFO: Pod "downwardapi-volume-9506af54-4b5b-4e97-96ec-8f997b0f39a3": Phase="Running", Reason="", readiness=true. Elapsed: 4.116184628s Jul 20 02:18:18.029: INFO: Pod "downwardapi-volume-9506af54-4b5b-4e97-96ec-8f997b0f39a3": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.119854715s STEP: Saw pod success Jul 20 02:18:18.029: INFO: Pod "downwardapi-volume-9506af54-4b5b-4e97-96ec-8f997b0f39a3" satisfied condition "Succeeded or Failed" Jul 20 02:18:18.032: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-9506af54-4b5b-4e97-96ec-8f997b0f39a3 container client-container: STEP: delete the pod Jul 20 02:18:18.206: INFO: Waiting for pod downwardapi-volume-9506af54-4b5b-4e97-96ec-8f997b0f39a3 to disappear Jul 20 02:18:18.241: INFO: Pod downwardapi-volume-9506af54-4b5b-4e97-96ec-8f997b0f39a3 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:18:18.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8715" for this suite. • [SLOW TEST:6.430 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":93,"skipped":1747,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:18:18.249: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name s-test-opt-del-a85d6c1c-3f2f-4008-a2ed-8817a100758f STEP: Creating secret with name s-test-opt-upd-38f65c42-5cdf-415f-af12-fb7d213870db STEP: Creating the pod STEP: Deleting secret s-test-opt-del-a85d6c1c-3f2f-4008-a2ed-8817a100758f STEP: Updating secret s-test-opt-upd-38f65c42-5cdf-415f-af12-fb7d213870db STEP: Creating secret with name s-test-opt-create-24ba88e4-30de-4c00-abc8-5d16ef3180ba STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:18:28.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-397" for this suite. 
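------------------------------
The projected-secret test above hinges on the Optional flag: the pod mounts a projected volume over secrets that may not exist yet, and the kubelet rewrites the volume contents as secrets are deleted, updated, and created. A minimal Go sketch of such a pod spec, using k8s.io/api types (the volume name, mount path, and image here are illustrative, not taken from the test):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func boolPtr(b bool) *bool { return &b }

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "projected-secret-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "s-test-opt-del"},
								// Optional lets the pod start even when the secret
								// is absent; the kubelet fills the volume in once
								// the secret is created, and empties it on delete.
								Optional: boolPtr(true),
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "secret-volume-test",
				Image: "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20",
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-secret-volume",
					MountPath: "/etc/projected-secret-volume",
					ReadOnly:  true,
				}},
			}},
		},
	}
	fmt.Println(pod.Name)
}
------------------------------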
• [SLOW TEST:10.609 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":294,"completed":94,"skipped":1763,"failed":0} SSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:18:28.857: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod Jul 20 02:18:28.916: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:18:39.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-4687" for this suite. 
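------------------------------
The init-container test above asserts ordering on a RestartAlways pod: every entry in spec.initContainers must run to completion, in sequence, before any app container starts. A hedged sketch of a pod with that shape (images and names are illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			// Init containers run serially: init-1 must exit 0 before
			// init-2 starts, and both must succeed before run-1 starts.
			InitContainers: []corev1.Container{
				{Name: "init-1", Image: "busybox:1.29", Command: []string{"/bin/true"}},
				{Name: "init-2", Image: "busybox:1.29", Command: []string{"/bin/true"}},
			},
			Containers: []corev1.Container{
				{Name: "run-1", Image: "busybox:1.29", Command: []string{"/bin/sh", "-c", "sleep 3600"}},
			},
		},
	}
	fmt.Println(pod.Name)
}
------------------------------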
• [SLOW TEST:10.852 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":294,"completed":95,"skipped":1768,"failed":0} [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:18:39.710: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in container's command Jul 20 02:18:39.787: INFO: Waiting up to 5m0s for pod "var-expansion-20b10509-7ac8-403c-97b9-2110779fe1cd" in namespace "var-expansion-9914" to be "Succeeded or Failed" Jul 20 02:18:39.791: INFO: Pod "var-expansion-20b10509-7ac8-403c-97b9-2110779fe1cd": Phase="Pending", Reason="", readiness=false. Elapsed: 3.917155ms Jul 20 02:18:41.851: INFO: Pod "var-expansion-20b10509-7ac8-403c-97b9-2110779fe1cd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06416464s Jul 20 02:18:43.943: INFO: Pod "var-expansion-20b10509-7ac8-403c-97b9-2110779fe1cd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.156082331s STEP: Saw pod success Jul 20 02:18:43.943: INFO: Pod "var-expansion-20b10509-7ac8-403c-97b9-2110779fe1cd" satisfied condition "Succeeded or Failed" Jul 20 02:18:43.946: INFO: Trying to get logs from node latest-worker2 pod var-expansion-20b10509-7ac8-403c-97b9-2110779fe1cd container dapi-container: STEP: delete the pod Jul 20 02:18:44.471: INFO: Waiting for pod var-expansion-20b10509-7ac8-403c-97b9-2110779fe1cd to disappear Jul 20 02:18:44.486: INFO: Pod var-expansion-20b10509-7ac8-403c-97b9-2110779fe1cd no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:18:44.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9914" for this suite. 
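------------------------------
The variable-expansion test above passes only if the kubelet itself substitutes $(VAR) references in a container's command from that container's environment before exec; no shell is involved. A small sketch of the mechanism (the variable name and value are illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "dapi-container",
				Image: "busybox:1.29",
				Env:   []corev1.EnvVar{{Name: "MESSAGE", Value: "test-value"}},
				// The kubelet rewrites $(MESSAGE) to "test-value" before
				// starting the process; no shell performs this expansion.
				Command: []string{"/bin/echo", "$(MESSAGE)"},
			}},
		},
	}
	fmt.Println(pod.Spec.Containers[0].Command)
}
------------------------------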
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":294,"completed":96,"skipped":1768,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:18:44.494: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on node default medium Jul 20 02:18:44.669: INFO: Waiting up to 5m0s for pod "pod-e5778c5e-32fb-4169-bd63-2514acdca49b" in namespace "emptydir-8140" to be "Succeeded or Failed" Jul 20 02:18:44.674: INFO: Pod "pod-e5778c5e-32fb-4169-bd63-2514acdca49b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.940573ms Jul 20 02:18:46.710: INFO: Pod "pod-e5778c5e-32fb-4169-bd63-2514acdca49b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040870198s Jul 20 02:18:48.715: INFO: Pod "pod-e5778c5e-32fb-4169-bd63-2514acdca49b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.045131212s STEP: Saw pod success Jul 20 02:18:48.715: INFO: Pod "pod-e5778c5e-32fb-4169-bd63-2514acdca49b" satisfied condition "Succeeded or Failed" Jul 20 02:18:48.718: INFO: Trying to get logs from node latest-worker2 pod pod-e5778c5e-32fb-4169-bd63-2514acdca49b container test-container: STEP: delete the pod Jul 20 02:18:48.794: INFO: Waiting for pod pod-e5778c5e-32fb-4169-bd63-2514acdca49b to disappear Jul 20 02:18:48.799: INFO: Pod pod-e5778c5e-32fb-4169-bd63-2514acdca49b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:18:48.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8140" for this suite. 
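------------------------------
The emptyDir case above, "(non-root,0666,default)", names one cell of a test matrix: the user the container runs as, the file mode it requests, and the volume medium. Sketched with k8s.io/api types, the pod shape is roughly the following (the UID and mount path are assumptions; the real test drives the mode bits through the test image's arguments):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64Ptr(i int64) *int64 { return &i }
func boolPtr(b bool) *bool    { return &b }

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-nonroot"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser:    int64Ptr(1001), // the "non-root" axis of the matrix
				RunAsNonRoot: boolPtr(true),
			},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				// Leaving Medium empty selects the node's default backing storage.
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{{
				Name:         "test-container",
				Image:        "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20",
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
	fmt.Println(pod.Name)
}
------------------------------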
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":97,"skipped":1778,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:18:48.811: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-configmap-j9m7 STEP: Creating a pod to test atomic-volume-subpath Jul 20 02:18:49.232: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-j9m7" in namespace "subpath-3421" to be "Succeeded or Failed" Jul 20 02:18:49.237: INFO: Pod "pod-subpath-test-configmap-j9m7": Phase="Pending", Reason="", readiness=false. Elapsed: 5.069849ms Jul 20 02:18:51.241: INFO: Pod "pod-subpath-test-configmap-j9m7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009552411s Jul 20 02:18:53.246: INFO: Pod "pod-subpath-test-configmap-j9m7": Phase="Running", Reason="", readiness=true. Elapsed: 4.014242319s Jul 20 02:18:55.250: INFO: Pod "pod-subpath-test-configmap-j9m7": Phase="Running", Reason="", readiness=true. Elapsed: 6.01847147s Jul 20 02:18:57.254: INFO: Pod "pod-subpath-test-configmap-j9m7": Phase="Running", Reason="", readiness=true. Elapsed: 8.022771324s Jul 20 02:18:59.259: INFO: Pod "pod-subpath-test-configmap-j9m7": Phase="Running", Reason="", readiness=true. Elapsed: 10.02743054s Jul 20 02:19:01.263: INFO: Pod "pod-subpath-test-configmap-j9m7": Phase="Running", Reason="", readiness=true. Elapsed: 12.031077783s Jul 20 02:19:03.267: INFO: Pod "pod-subpath-test-configmap-j9m7": Phase="Running", Reason="", readiness=true. Elapsed: 14.035564961s Jul 20 02:19:05.272: INFO: Pod "pod-subpath-test-configmap-j9m7": Phase="Running", Reason="", readiness=true. Elapsed: 16.04029831s Jul 20 02:19:07.308: INFO: Pod "pod-subpath-test-configmap-j9m7": Phase="Running", Reason="", readiness=true. Elapsed: 18.076735773s Jul 20 02:19:09.312: INFO: Pod "pod-subpath-test-configmap-j9m7": Phase="Running", Reason="", readiness=true. Elapsed: 20.080326185s Jul 20 02:19:11.316: INFO: Pod "pod-subpath-test-configmap-j9m7": Phase="Running", Reason="", readiness=true. Elapsed: 22.084083413s Jul 20 02:19:13.320: INFO: Pod "pod-subpath-test-configmap-j9m7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.088611402s STEP: Saw pod success Jul 20 02:19:13.320: INFO: Pod "pod-subpath-test-configmap-j9m7" satisfied condition "Succeeded or Failed" Jul 20 02:19:13.323: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-configmap-j9m7 container test-container-subpath-configmap-j9m7: STEP: delete the pod Jul 20 02:19:13.394: INFO: Waiting for pod pod-subpath-test-configmap-j9m7 to disappear Jul 20 02:19:13.398: INFO: Pod pod-subpath-test-configmap-j9m7 no longer exists STEP: Deleting pod pod-subpath-test-configmap-j9m7 Jul 20 02:19:13.398: INFO: Deleting pod "pod-subpath-test-configmap-j9m7" in namespace "subpath-3421" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:19:13.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3421" for this suite. • [SLOW TEST:24.597 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":294,"completed":98,"skipped":1809,"failed":0} SSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:19:13.408: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8565.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-8565.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8565.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8565.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-8565.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-8565.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-8565.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-8565.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-8565.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8565.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-8565.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8565.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-8565.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-8565.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-8565.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-8565.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-8565.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8565.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jul 20 02:19:21.535: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8565.svc.cluster.local from pod dns-8565/dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea: the server could not find the requested resource (get pods dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea) Jul 20 02:19:21.538: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8565.svc.cluster.local from pod dns-8565/dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea: the server could not find the requested resource (get pods dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea) Jul 20 02:19:21.542: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8565.svc.cluster.local from pod dns-8565/dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea: the server could not find the requested resource (get pods dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea) Jul 20 02:19:21.544: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8565.svc.cluster.local from pod dns-8565/dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea: the server could not find the requested resource (get pods dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea) Jul 20 02:19:21.553: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8565.svc.cluster.local from pod dns-8565/dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea: the server could not find the requested resource (get pods dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea) Jul 20 02:19:21.555: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8565.svc.cluster.local from pod dns-8565/dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea: the server could not find the requested resource (get pods dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea) Jul 20 02:19:21.558: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8565.svc.cluster.local from pod 
dns-8565/dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea: the server could not find the requested resource (get pods dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea) Jul 20 02:19:21.561: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8565.svc.cluster.local from pod dns-8565/dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea: the server could not find the requested resource (get pods dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea) Jul 20 02:19:21.567: INFO: Lookups using dns-8565/dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8565.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8565.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8565.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8565.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8565.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8565.svc.cluster.local jessie_udp@dns-test-service-2.dns-8565.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8565.svc.cluster.local] Jul 20 02:19:26.572: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8565.svc.cluster.local from pod dns-8565/dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea: the server could not find the requested resource (get pods dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea) Jul 20 02:19:26.576: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8565.svc.cluster.local from pod dns-8565/dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea: the server could not find the requested resource (get pods dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea) Jul 20 02:19:26.579: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8565.svc.cluster.local from pod dns-8565/dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea: the server could not find the requested resource (get pods dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea) Jul 20 02:19:26.581: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8565.svc.cluster.local from pod dns-8565/dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea: the server could not find the requested resource (get pods dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea) Jul 20 02:19:26.590: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8565.svc.cluster.local from pod dns-8565/dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea: the server could not find the requested resource (get pods dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea) Jul 20 02:19:26.593: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8565.svc.cluster.local from pod dns-8565/dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea: the server could not find the requested resource (get pods dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea) Jul 20 02:19:26.596: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8565.svc.cluster.local from pod dns-8565/dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea: the server could not find the requested resource (get pods dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea) Jul 20 02:19:26.600: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8565.svc.cluster.local from pod dns-8565/dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea: the server could not find the requested resource (get pods dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea) Jul 20 02:19:26.618: INFO: Lookups using dns-8565/dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8565.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8565.svc.cluster.local 
wheezy_udp@dns-test-service-2.dns-8565.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8565.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8565.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8565.svc.cluster.local jessie_udp@dns-test-service-2.dns-8565.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8565.svc.cluster.local] Jul 20 02:19:31.572: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8565.svc.cluster.local from pod dns-8565/dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea: the server could not find the requested resource (get pods dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea) Jul 20 02:19:31.577: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8565.svc.cluster.local from pod dns-8565/dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea: the server could not find the requested resource (get pods dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea) Jul 20 02:19:31.580: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8565.svc.cluster.local from pod dns-8565/dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea: the server could not find the requested resource (get pods dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea) Jul 20 02:19:31.583: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8565.svc.cluster.local from pod dns-8565/dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea: the server could not find the requested resource (get pods dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea) Jul 20 02:19:31.594: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8565.svc.cluster.local from pod dns-8565/dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea: the server could not find the requested resource (get pods dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea) Jul 20 02:19:31.597: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8565.svc.cluster.local from pod dns-8565/dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea: the server could not find the requested resource (get pods dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea) Jul 20 02:19:31.600: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8565.svc.cluster.local from pod dns-8565/dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea: the server could not find the requested resource (get pods dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea) Jul 20 02:19:31.603: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8565.svc.cluster.local from pod dns-8565/dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea: the server could not find the requested resource (get pods dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea) Jul 20 02:19:31.610: INFO: Lookups using dns-8565/dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8565.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8565.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8565.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8565.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8565.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8565.svc.cluster.local jessie_udp@dns-test-service-2.dns-8565.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8565.svc.cluster.local] Jul 20 02:19:36.572: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8565.svc.cluster.local from pod dns-8565/dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea: the server could not find the requested resource (get pods dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea) Jul 20 02:19:36.576: INFO: Unable to read 
wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8565.svc.cluster.local from pod dns-8565/dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea: the server could not find the requested resource (get pods dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea) Jul 20 02:19:36.579: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8565.svc.cluster.local from pod dns-8565/dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea: the server could not find the requested resource (get pods dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea) Jul 20 02:19:36.582: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8565.svc.cluster.local from pod dns-8565/dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea: the server could not find the requested resource (get pods dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea) Jul 20 02:19:36.591: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8565.svc.cluster.local from pod dns-8565/dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea: the server could not find the requested resource (get pods dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea) Jul 20 02:19:36.594: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8565.svc.cluster.local from pod dns-8565/dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea: the server could not find the requested resource (get pods dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea) Jul 20 02:19:36.597: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8565.svc.cluster.local from pod dns-8565/dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea: the server could not find the requested resource (get pods dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea) Jul 20 02:19:36.600: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8565.svc.cluster.local from pod dns-8565/dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea: the server could not find the requested resource (get pods dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea) Jul 20 02:19:36.619: INFO: Lookups using dns-8565/dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8565.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8565.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8565.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8565.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8565.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8565.svc.cluster.local jessie_udp@dns-test-service-2.dns-8565.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8565.svc.cluster.local] Jul 20 02:19:41.572: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8565.svc.cluster.local from pod dns-8565/dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea: the server could not find the requested resource (get pods dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea) Jul 20 02:19:41.576: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8565.svc.cluster.local from pod dns-8565/dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea: the server could not find the requested resource (get pods dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea) Jul 20 02:19:41.580: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8565.svc.cluster.local from pod dns-8565/dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea: the server could not find the requested resource (get pods dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea) Jul 20 02:19:41.583: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8565.svc.cluster.local from pod dns-8565/dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea: the server could not find the requested 
resource (get pods dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea) Jul 20 02:19:41.591: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8565.svc.cluster.local from pod dns-8565/dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea: the server could not find the requested resource (get pods dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea) Jul 20 02:19:41.594: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8565.svc.cluster.local from pod dns-8565/dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea: the server could not find the requested resource (get pods dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea) Jul 20 02:19:41.596: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8565.svc.cluster.local from pod dns-8565/dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea: the server could not find the requested resource (get pods dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea) Jul 20 02:19:41.599: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8565.svc.cluster.local from pod dns-8565/dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea: the server could not find the requested resource (get pods dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea) Jul 20 02:19:41.606: INFO: Lookups using dns-8565/dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8565.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8565.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8565.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8565.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8565.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8565.svc.cluster.local jessie_udp@dns-test-service-2.dns-8565.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8565.svc.cluster.local] Jul 20 02:19:46.571: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8565.svc.cluster.local from pod dns-8565/dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea: the server could not find the requested resource (get pods dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea) Jul 20 02:19:46.575: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8565.svc.cluster.local from pod dns-8565/dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea: the server could not find the requested resource (get pods dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea) Jul 20 02:19:46.579: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8565.svc.cluster.local from pod dns-8565/dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea: the server could not find the requested resource (get pods dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea) Jul 20 02:19:46.582: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8565.svc.cluster.local from pod dns-8565/dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea: the server could not find the requested resource (get pods dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea) Jul 20 02:19:46.590: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8565.svc.cluster.local from pod dns-8565/dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea: the server could not find the requested resource (get pods dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea) Jul 20 02:19:46.592: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8565.svc.cluster.local from pod dns-8565/dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea: the server could not find the requested resource (get pods dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea) Jul 20 02:19:46.597: INFO: Unable to read 
jessie_udp@dns-test-service-2.dns-8565.svc.cluster.local from pod dns-8565/dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea: the server could not find the requested resource (get pods dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea) Jul 20 02:19:46.599: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8565.svc.cluster.local from pod dns-8565/dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea: the server could not find the requested resource (get pods dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea) Jul 20 02:19:46.781: INFO: Lookups using dns-8565/dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8565.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8565.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8565.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8565.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8565.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8565.svc.cluster.local jessie_udp@dns-test-service-2.dns-8565.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8565.svc.cluster.local] Jul 20 02:19:51.606: INFO: DNS probes using dns-8565/dns-test-281e71b1-b55d-496a-8019-cdfd2b0aeeea succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:19:52.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8565" for this suite. • [SLOW TEST:38.927 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":294,"completed":99,"skipped":1816,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:19:52.336: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a volume subpath [sig-storage] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in volume subpath Jul 20 02:19:52.454: INFO: Waiting up to 5m0s for pod "var-expansion-3918f5eb-32ba-4b47-b19f-63c38e148018" in namespace "var-expansion-4559" to be "Succeeded or Failed" Jul 20 02:19:52.488: INFO: Pod "var-expansion-3918f5eb-32ba-4b47-b19f-63c38e148018": Phase="Pending", Reason="", readiness=false. Elapsed: 34.512255ms Jul 20 02:19:54.542: INFO: Pod "var-expansion-3918f5eb-32ba-4b47-b19f-63c38e148018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088281512s Jul 20 02:19:56.546: INFO: Pod "var-expansion-3918f5eb-32ba-4b47-b19f-63c38e148018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.092188799s Jul 20 02:19:58.550: INFO: Pod "var-expansion-3918f5eb-32ba-4b47-b19f-63c38e148018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.09646426s STEP: Saw pod success Jul 20 02:19:58.550: INFO: Pod "var-expansion-3918f5eb-32ba-4b47-b19f-63c38e148018" satisfied condition "Succeeded or Failed" Jul 20 02:19:58.553: INFO: Trying to get logs from node latest-worker2 pod var-expansion-3918f5eb-32ba-4b47-b19f-63c38e148018 container dapi-container: STEP: delete the pod Jul 20 02:19:58.572: INFO: Waiting for pod var-expansion-3918f5eb-32ba-4b47-b19f-63c38e148018 to disappear Jul 20 02:19:58.595: INFO: Pod var-expansion-3918f5eb-32ba-4b47-b19f-63c38e148018 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:19:58.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4559" for this suite. • [SLOW TEST:6.268 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow substituting values in a volume subpath [sig-storage] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance]","total":294,"completed":100,"skipped":1834,"failed":0} SS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:19:58.604: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0720 02:19:59.729491 8 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Jul 20 02:21:01.775: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:21:01.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6154" for this suite. 
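------------------------------
The garbage-collector test above deletes a deployment without orphaning and then polls until the dependent ReplicaSet and pods disappear; the "expected 0 rs, got 1 rs" steps are that poll still in progress. The behavior turns on the delete propagation policy, as in this hedged client-go sketch (namespace and deployment name are illustrative):

package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Background propagation deletes the Deployment immediately and lets the
	// garbage collector remove the dependent ReplicaSet and pods afterwards,
	// which is exactly what the test waits for. DeletePropagationOrphan would
	// instead leave the ReplicaSet (and its pods) running.
	policy := metav1.DeletePropagationBackground
	if err := cs.AppsV1().Deployments("default").Delete(context.TODO(), "simple-deployment",
		metav1.DeleteOptions{PropagationPolicy: &policy}); err != nil {
		panic(err)
	}
	fmt.Println("deployment deleted; dependents left to the garbage collector")
}
------------------------------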
• [SLOW TEST:63.178 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":294,"completed":101,"skipped":1836,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:21:01.783: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Jul 20 02:21:01.835: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jul 20 02:21:01.849: INFO: Waiting for terminating namespaces to be deleted... Jul 20 02:21:01.852: INFO: Logging pods the apiserver thinks are on node latest-worker before test Jul 20 02:21:01.858: INFO: coredns-f9fd979d6-s745j from kube-system started at 2020-07-19 21:39:25 +0000 UTC (1 container status recorded) Jul 20 02:21:01.858: INFO: Container coredns ready: true, restart count 0 Jul 20 02:21:01.858: INFO: coredns-f9fd979d6-zs4sj from kube-system started at 2020-07-19 21:39:36 +0000 UTC (1 container status recorded) Jul 20 02:21:01.858: INFO: Container coredns ready: true, restart count 0 Jul 20 02:21:01.858: INFO: kindnet-46dnt from kube-system started at 2020-07-19 21:38:46 +0000 UTC (1 container status recorded) Jul 20 02:21:01.858: INFO: Container kindnet-cni ready: true, restart count 0 Jul 20 02:21:01.858: INFO: kube-proxy-sxpg9 from kube-system started at 2020-07-19 21:38:45 +0000 UTC (1 container status recorded) Jul 20 02:21:01.858: INFO: Container kube-proxy ready: true, restart count 0 Jul 20 02:21:01.858: INFO: local-path-provisioner-8b46957d4-2gzpd from local-path-storage started at 2020-07-19 21:39:25 +0000 UTC (1 container status recorded) Jul 20 02:21:01.858: INFO: Container local-path-provisioner ready: true, restart count 0 Jul 20 02:21:01.858: INFO: Logging pods the apiserver thinks are on node latest-worker2 before test Jul 20 02:21:01.862: INFO: kindnet-g6zbt from kube-system started at 2020-07-19 21:38:46 +0000 UTC (1 container status recorded) Jul 20 02:21:01.862: INFO: Container kindnet-cni ready: true, restart count 0 Jul 20 02:21:01.862: INFO: kube-proxy-nsnzn from kube-system started at 2020-07-19 21:38:45 +0000 UTC (1 container status recorded) Jul 20 02:21:01.862: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to schedule Pod with nonempty NodeSelector. 
STEP: Considering event: Type = [Warning], Name = [restricted-pod.1623548e02d97e06], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:21:02.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1524" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":294,"completed":102,"skipped":1852,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:21:02.933: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:21:08.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-4632" for this suite. 
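------------------------------
Adoption in the ReplicationController test above works purely through label selection: a bare pod already carries the label the controller selects on, so the controller sets an ownerReference on the existing pod instead of creating a replacement. A sketch of such a matching pair (image is illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	labels := map[string]string{"name": "pod-adoption"}

	// A bare pod created first, with no owner.
	orphan := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption", Labels: labels},
		Spec: corev1.PodSpec{Containers: []corev1.Container{{
			Name:  "pod-adoption",
			Image: "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20",
		}}},
	}

	// An RC whose selector matches the pod's labels; with Replicas=1 the
	// controller adopts the orphan instead of creating a new pod.
	rc := &corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: int32Ptr(1),
			Selector: labels,
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec:       orphan.Spec,
			},
		},
	}
	fmt.Println(orphan.Name, rc.Name)
}
------------------------------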
• [SLOW TEST:5.178 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":294,"completed":103,"skipped":1873,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:21:08.112: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:255 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1410 STEP: creating a pod Jul 20 02:21:08.232: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config run logs-generator --image=us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20 --namespace=kubectl-4605 -- logs-generator --log-lines-total 100 --run-duration 20s' Jul 20 02:21:08.345: INFO: stderr: "" Jul 20 02:21:08.345: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Waiting for log generator to start. Jul 20 02:21:08.345: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Jul 20 02:21:08.345: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-4605" to be "running and ready, or succeeded" Jul 20 02:21:08.403: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 58.4804ms Jul 20 02:21:10.407: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062463893s Jul 20 02:21:12.861: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.51654018s Jul 20 02:21:12.861: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Jul 20 02:21:12.861: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] STEP: checking for matching strings Jul 20 02:21:12.861: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4605' Jul 20 02:21:13.001: INFO: stderr: "" Jul 20 02:21:13.001: INFO: stdout: "I0720 02:21:11.221345 1 logs_generator.go:76] 0 POST /api/v1/namespaces/kube-system/pods/q98 270\nI0720 02:21:11.421494 1 logs_generator.go:76] 1 GET /api/v1/namespaces/default/pods/d896 246\nI0720 02:21:11.621504 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/kube-system/pods/vzsz 526\nI0720 02:21:11.821566 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/kube-system/pods/fl4l 285\nI0720 02:21:12.021530 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/ns/pods/d8t 211\nI0720 02:21:12.221573 1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/9d6 248\nI0720 02:21:12.421554 1 logs_generator.go:76] 6 GET /api/v1/namespaces/kube-system/pods/ks5 248\nI0720 02:21:12.621540 1 logs_generator.go:76] 7 GET /api/v1/namespaces/ns/pods/bscx 357\nI0720 02:21:12.821487 1 logs_generator.go:76] 8 GET /api/v1/namespaces/default/pods/8lng 273\n" STEP: limiting log lines Jul 20 02:21:13.001: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4605 --tail=1' Jul 20 02:21:13.123: INFO: stderr: "" Jul 20 02:21:13.123: INFO: stdout: "I0720 02:21:13.021535 1 logs_generator.go:76] 9 GET /api/v1/namespaces/kube-system/pods/gnff 298\n" Jul 20 02:21:13.123: INFO: got output "I0720 02:21:13.021535 1 logs_generator.go:76] 9 GET /api/v1/namespaces/kube-system/pods/gnff 298\n" STEP: limiting log bytes Jul 20 02:21:13.123: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4605 --limit-bytes=1' Jul 20 02:21:13.310: INFO: stderr: "" Jul 20 02:21:13.310: INFO: stdout: "I" Jul 20 02:21:13.310: INFO: got output "I" STEP: exposing timestamps Jul 20 02:21:13.310: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4605 --tail=1 --timestamps' Jul 20 02:21:13.458: INFO: stderr: "" Jul 20 02:21:13.458: INFO: stdout: "2020-07-20T02:21:13.421671516Z I0720 02:21:13.421482 1 logs_generator.go:76] 11 GET /api/v1/namespaces/default/pods/n6wm 381\n" Jul 20 02:21:13.458: INFO: got output "2020-07-20T02:21:13.421671516Z I0720 02:21:13.421482 1 logs_generator.go:76] 11 GET /api/v1/namespaces/default/pods/n6wm 381\n" STEP: restricting to a time range Jul 20 02:21:15.959: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4605 --since=1s' Jul 20 02:21:16.084: INFO: stderr: "" Jul 20 02:21:16.084: INFO: stdout: "I0720 02:21:15.221609 1 logs_generator.go:76] 20 GET /api/v1/namespaces/default/pods/khps 287\nI0720 02:21:15.421564 1 logs_generator.go:76] 21 POST /api/v1/namespaces/default/pods/sllt 312\nI0720 02:21:15.621527 1 logs_generator.go:76] 22 GET /api/v1/namespaces/kube-system/pods/rzk9 531\nI0720 02:21:15.821587 1 logs_generator.go:76] 23 PUT /api/v1/namespaces/ns/pods/x6jn 453\nI0720 02:21:16.021471 1 logs_generator.go:76] 24 PUT /api/v1/namespaces/ns/pods/mkj 375\n" Jul 20 02:21:16.084: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 
--kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4605 --since=24h' Jul 20 02:21:16.200: INFO: stderr: "" Jul 20 02:21:16.200: INFO: stdout: "I0720 02:21:11.221345 1 logs_generator.go:76] 0 POST /api/v1/namespaces/kube-system/pods/q98 270\nI0720 02:21:11.421494 1 logs_generator.go:76] 1 GET /api/v1/namespaces/default/pods/d896 246\nI0720 02:21:11.621504 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/kube-system/pods/vzsz 526\nI0720 02:21:11.821566 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/kube-system/pods/fl4l 285\nI0720 02:21:12.021530 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/ns/pods/d8t 211\nI0720 02:21:12.221573 1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/9d6 248\nI0720 02:21:12.421554 1 logs_generator.go:76] 6 GET /api/v1/namespaces/kube-system/pods/ks5 248\nI0720 02:21:12.621540 1 logs_generator.go:76] 7 GET /api/v1/namespaces/ns/pods/bscx 357\nI0720 02:21:12.821487 1 logs_generator.go:76] 8 GET /api/v1/namespaces/default/pods/8lng 273\nI0720 02:21:13.021535 1 logs_generator.go:76] 9 GET /api/v1/namespaces/kube-system/pods/gnff 298\nI0720 02:21:13.221507 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/kube-system/pods/kvh 390\nI0720 02:21:13.421482 1 logs_generator.go:76] 11 GET /api/v1/namespaces/default/pods/n6wm 381\nI0720 02:21:13.621508 1 logs_generator.go:76] 12 GET /api/v1/namespaces/default/pods/q2t 348\nI0720 02:21:13.821549 1 logs_generator.go:76] 13 POST /api/v1/namespaces/ns/pods/pbh 435\nI0720 02:21:14.021562 1 logs_generator.go:76] 14 PUT /api/v1/namespaces/default/pods/fzn 323\nI0720 02:21:14.221532 1 logs_generator.go:76] 15 POST /api/v1/namespaces/kube-system/pods/msg 206\nI0720 02:21:14.421545 1 logs_generator.go:76] 16 PUT /api/v1/namespaces/kube-system/pods/zrwt 315\nI0720 02:21:14.621581 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/kube-system/pods/rsx 401\nI0720 02:21:14.821585 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/ns/pods/r96 412\nI0720 02:21:15.021527 1 logs_generator.go:76] 19 GET /api/v1/namespaces/ns/pods/qn89 435\nI0720 02:21:15.221609 1 logs_generator.go:76] 20 GET /api/v1/namespaces/default/pods/khps 287\nI0720 02:21:15.421564 1 logs_generator.go:76] 21 POST /api/v1/namespaces/default/pods/sllt 312\nI0720 02:21:15.621527 1 logs_generator.go:76] 22 GET /api/v1/namespaces/kube-system/pods/rzk9 531\nI0720 02:21:15.821587 1 logs_generator.go:76] 23 PUT /api/v1/namespaces/ns/pods/x6jn 453\nI0720 02:21:16.021471 1 logs_generator.go:76] 24 PUT /api/v1/namespaces/ns/pods/mkj 375\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1416 Jul 20 02:21:16.201: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-4605' Jul 20 02:21:23.883: INFO: stderr: "" Jul 20 02:21:23.883: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:21:23.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4605" for this suite. 
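------------------------------
The kubectl flags exercised above (--tail, --limit-bytes, --timestamps, --since) are thin wrappers over the PodLogOptions of the pod logs subresource. A hedged client-go sketch of the same filtering (reusing the pod and namespace names from the log purely for illustration):

package main

import (
	"context"
	"io"
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func int64Ptr(i int64) *int64 { return &i }

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// kubectl flag -> field: --tail=1 -> TailLines, --since=1s -> SinceSeconds,
	// --timestamps -> Timestamps; --limit-bytes maps to LimitBytes the same way.
	opts := &corev1.PodLogOptions{
		TailLines:    int64Ptr(1),
		SinceSeconds: int64Ptr(1),
		Timestamps:   true,
	}
	stream, err := cs.CoreV1().Pods("kubectl-4605").GetLogs("logs-generator", opts).Stream(context.TODO())
	if err != nil {
		panic(err)
	}
	defer stream.Close()
	io.Copy(os.Stdout, stream) // the filtered log lines, as kubectl would print them
}
------------------------------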
• [SLOW TEST:15.804 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1406 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":294,"completed":104,"skipped":1929,"failed":0} [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:21:23.916: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-0b47062f-2ca7-4727-9c0b-4531e81d684e STEP: Creating a pod to test consume configMaps Jul 20 02:21:24.021: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d0881c49-29d4-44f3-994b-5bc2c41ecf60" in namespace "projected-5775" to be "Succeeded or Failed" Jul 20 02:21:24.025: INFO: Pod "pod-projected-configmaps-d0881c49-29d4-44f3-994b-5bc2c41ecf60": Phase="Pending", Reason="", readiness=false. Elapsed: 4.211722ms Jul 20 02:21:26.030: INFO: Pod "pod-projected-configmaps-d0881c49-29d4-44f3-994b-5bc2c41ecf60": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008598799s Jul 20 02:21:28.034: INFO: Pod "pod-projected-configmaps-d0881c49-29d4-44f3-994b-5bc2c41ecf60": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013241893s Jul 20 02:21:30.039: INFO: Pod "pod-projected-configmaps-d0881c49-29d4-44f3-994b-5bc2c41ecf60": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.017587187s STEP: Saw pod success Jul 20 02:21:30.039: INFO: Pod "pod-projected-configmaps-d0881c49-29d4-44f3-994b-5bc2c41ecf60" satisfied condition "Succeeded or Failed" Jul 20 02:21:30.042: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-d0881c49-29d4-44f3-994b-5bc2c41ecf60 container projected-configmap-volume-test: STEP: delete the pod Jul 20 02:21:30.090: INFO: Waiting for pod pod-projected-configmaps-d0881c49-29d4-44f3-994b-5bc2c41ecf60 to disappear Jul 20 02:21:30.100: INFO: Pod pod-projected-configmaps-d0881c49-29d4-44f3-994b-5bc2c41ecf60 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:21:30.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5775" for this suite. 
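The projected-configMap test above creates a ConfigMap, remaps one of its keys to a different file path inside a projected volume ("with mappings"), and runs a short-lived pod as a non-root user that reads the remapped file, expecting the pod to end in Succeeded. A sketch of such a pod spec built with the client-go types; the key names, paths, UID, and image below are illustrative assumptions, not the test's exact values:

// podspec.go - a sketch of a projected-ConfigMap pod with a key-to-path
// mapping, run as a non-root UID.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid := int64(1000) // non-root, as the test name implies (assumed value)
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-example"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume-map"},
								// the "mapping": key data-1 is exposed under a different relative path
								Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "projected-configmap-volume-test",
				Image:   "docker.io/library/busybox:1.29", // any image with cat works
				Command: []string{"cat", "/etc/projected-configmap-volume/path/to/data-2"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-configmap-volume",
					MountPath: "/etc/projected-configmap-volume",
				}},
			}},
		},
	}
	fmt.Printf("%+v\n", pod.Spec.Volumes[0].VolumeSource.Projected)
}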
• [SLOW TEST:6.192 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":294,"completed":105,"skipped":1929,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:21:30.109: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on node default medium Jul 20 02:21:30.264: INFO: Waiting up to 5m0s for pod "pod-00764c98-1e0c-4215-9c40-3c5fcf01d819" in namespace "emptydir-9487" to be "Succeeded or Failed" Jul 20 02:21:30.310: INFO: Pod "pod-00764c98-1e0c-4215-9c40-3c5fcf01d819": Phase="Pending", Reason="", readiness=false. Elapsed: 45.547186ms Jul 20 02:21:32.314: INFO: Pod "pod-00764c98-1e0c-4215-9c40-3c5fcf01d819": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04991415s Jul 20 02:21:34.318: INFO: Pod "pod-00764c98-1e0c-4215-9c40-3c5fcf01d819": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.054012661s STEP: Saw pod success Jul 20 02:21:34.318: INFO: Pod "pod-00764c98-1e0c-4215-9c40-3c5fcf01d819" satisfied condition "Succeeded or Failed" Jul 20 02:21:34.321: INFO: Trying to get logs from node latest-worker2 pod pod-00764c98-1e0c-4215-9c40-3c5fcf01d819 container test-container: STEP: delete the pod Jul 20 02:21:34.359: INFO: Waiting for pod pod-00764c98-1e0c-4215-9c40-3c5fcf01d819 to disappear Jul 20 02:21:34.425: INFO: Pod pod-00764c98-1e0c-4215-9c40-3c5fcf01d819 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:21:34.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9487" for this suite. 
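Both of the preceding specs rely on the wait pattern visible in the log lines `Waiting up to 5m0s for pod ... to be "Succeeded or Failed"` followed by repeated `Phase="Pending" ... Elapsed:` reports: poll the pod until its phase is terminal, logging elapsed time at each attempt. A sketch of that loop, assuming ~/.kube/config and reusing the pod and namespace names from the emptydir test above; this is not the framework's exact implementation:

// waitpod.go - poll a pod roughly every 2s until it reaches a terminal phase,
// mirroring the "Succeeded or Failed" wait seen in the log.
package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	ns, name := "emptydir-9487", "pod-00764c98-1e0c-4215-9c40-3c5fcf01d819"
	start := time.Now()
	err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("Pod %q: Phase=%q. Elapsed: %s\n", name, pod.Status.Phase, time.Since(start))
		// terminal phases end the wait; anything else keeps polling
		return pod.Status.Phase == corev1.PodSucceeded || pod.Status.Phase == corev1.PodFailed, nil
	})
	if err != nil {
		panic(err)
	}
}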
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":106,"skipped":1938,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:21:34.574: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-wd9kd in namespace proxy-5378 I0720 02:21:34.696824 8 runners.go:190] Created replication controller with name: proxy-service-wd9kd, namespace: proxy-5378, replica count: 1 I0720 02:21:35.747223 8 runners.go:190] proxy-service-wd9kd Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0720 02:21:36.747460 8 runners.go:190] proxy-service-wd9kd Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0720 02:21:37.747685 8 runners.go:190] proxy-service-wd9kd Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0720 02:21:38.747904 8 runners.go:190] proxy-service-wd9kd Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0720 02:21:39.748115 8 runners.go:190] proxy-service-wd9kd Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0720 02:21:40.748348 8 runners.go:190] proxy-service-wd9kd Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0720 02:21:41.748564 8 runners.go:190] proxy-service-wd9kd Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0720 02:21:42.748895 8 runners.go:190] proxy-service-wd9kd Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jul 20 02:21:42.752: INFO: setup took 8.121190058s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Jul 20 02:21:42.765: INFO: (0) /api/v1/namespaces/proxy-5378/pods/https:proxy-service-wd9kd-r22lh:460/proxy/: tls baz (200; 12.826742ms) Jul 20 02:21:42.768: INFO: (0) /api/v1/namespaces/proxy-5378/services/http:proxy-service-wd9kd:portname2/proxy/: bar (200; 15.971401ms) Jul 20 02:21:42.769: INFO: (0) /api/v1/namespaces/proxy-5378/pods/proxy-service-wd9kd-r22lh:162/proxy/: bar (200; 16.257964ms) Jul 20 02:21:42.769: INFO: (0) /api/v1/namespaces/proxy-5378/services/http:proxy-service-wd9kd:portname1/proxy/: foo (200; 16.093353ms) Jul 20 02:21:42.770: INFO: (0) /api/v1/namespaces/proxy-5378/services/proxy-service-wd9kd:portname2/proxy/: bar (200; 17.197158ms) Jul 20 02:21:42.770: INFO: (0) /api/v1/namespaces/proxy-5378/services/proxy-service-wd9kd:portname1/proxy/: foo (200; 17.018482ms) Jul 20 
02:21:42.770: INFO: (0) /api/v1/namespaces/proxy-5378/pods/http:proxy-service-wd9kd-r22lh:160/proxy/: foo (200; 17.080693ms) Jul 20 02:21:42.770: INFO: (0) /api/v1/namespaces/proxy-5378/pods/proxy-service-wd9kd-r22lh:160/proxy/: foo (200; 17.028497ms) Jul 20 02:21:42.770: INFO: (0) /api/v1/namespaces/proxy-5378/pods/proxy-service-wd9kd-r22lh/proxy/: test (200; 17.307608ms) Jul 20 02:21:42.771: INFO: (0) /api/v1/namespaces/proxy-5378/pods/http:proxy-service-wd9kd-r22lh:162/proxy/: bar (200; 18.831487ms) Jul 20 02:21:42.771: INFO: (0) /api/v1/namespaces/proxy-5378/pods/http:proxy-service-wd9kd-r22lh:1080/proxy/: ... (200; 19.063999ms) Jul 20 02:21:42.772: INFO: (0) /api/v1/namespaces/proxy-5378/pods/proxy-service-wd9kd-r22lh:1080/proxy/: test<... (200; 19.287749ms) Jul 20 02:21:42.772: INFO: (0) /api/v1/namespaces/proxy-5378/services/https:proxy-service-wd9kd:tlsportname1/proxy/: tls baz (200; 19.179531ms) Jul 20 02:21:42.775: INFO: (0) /api/v1/namespaces/proxy-5378/pods/https:proxy-service-wd9kd-r22lh:443/proxy/: test<... (200; 7.649585ms) Jul 20 02:21:42.784: INFO: (1) /api/v1/namespaces/proxy-5378/pods/http:proxy-service-wd9kd-r22lh:1080/proxy/: ... (200; 7.69605ms) Jul 20 02:21:42.784: INFO: (1) /api/v1/namespaces/proxy-5378/pods/proxy-service-wd9kd-r22lh:162/proxy/: bar (200; 7.816545ms) Jul 20 02:21:42.784: INFO: (1) /api/v1/namespaces/proxy-5378/pods/http:proxy-service-wd9kd-r22lh:162/proxy/: bar (200; 8.183207ms) Jul 20 02:21:42.785: INFO: (1) /api/v1/namespaces/proxy-5378/pods/proxy-service-wd9kd-r22lh/proxy/: test (200; 8.785933ms) Jul 20 02:21:42.785: INFO: (1) /api/v1/namespaces/proxy-5378/pods/https:proxy-service-wd9kd-r22lh:462/proxy/: tls qux (200; 8.990086ms) Jul 20 02:21:42.786: INFO: (1) /api/v1/namespaces/proxy-5378/services/proxy-service-wd9kd:portname2/proxy/: bar (200; 10.034456ms) Jul 20 02:21:42.786: INFO: (1) /api/v1/namespaces/proxy-5378/services/https:proxy-service-wd9kd:tlsportname2/proxy/: tls qux (200; 10.125187ms) Jul 20 02:21:42.786: INFO: (1) /api/v1/namespaces/proxy-5378/services/http:proxy-service-wd9kd:portname1/proxy/: foo (200; 10.213444ms) Jul 20 02:21:42.787: INFO: (1) /api/v1/namespaces/proxy-5378/services/http:proxy-service-wd9kd:portname2/proxy/: bar (200; 10.758597ms) Jul 20 02:21:42.787: INFO: (1) /api/v1/namespaces/proxy-5378/services/https:proxy-service-wd9kd:tlsportname1/proxy/: tls baz (200; 10.86437ms) Jul 20 02:21:42.787: INFO: (1) /api/v1/namespaces/proxy-5378/services/proxy-service-wd9kd:portname1/proxy/: foo (200; 11.23819ms) Jul 20 02:21:42.790: INFO: (2) /api/v1/namespaces/proxy-5378/pods/http:proxy-service-wd9kd-r22lh:1080/proxy/: ... (200; 2.743712ms) Jul 20 02:21:42.790: INFO: (2) /api/v1/namespaces/proxy-5378/pods/https:proxy-service-wd9kd-r22lh:443/proxy/: test<... 
(200; 7.588249ms) Jul 20 02:21:42.795: INFO: (2) /api/v1/namespaces/proxy-5378/pods/proxy-service-wd9kd-r22lh:160/proxy/: foo (200; 7.571626ms) Jul 20 02:21:42.795: INFO: (2) /api/v1/namespaces/proxy-5378/services/https:proxy-service-wd9kd:tlsportname1/proxy/: tls baz (200; 7.603751ms) Jul 20 02:21:42.795: INFO: (2) /api/v1/namespaces/proxy-5378/pods/http:proxy-service-wd9kd-r22lh:162/proxy/: bar (200; 7.582552ms) Jul 20 02:21:42.795: INFO: (2) /api/v1/namespaces/proxy-5378/pods/https:proxy-service-wd9kd-r22lh:460/proxy/: tls baz (200; 7.638895ms) Jul 20 02:21:42.795: INFO: (2) /api/v1/namespaces/proxy-5378/pods/https:proxy-service-wd9kd-r22lh:462/proxy/: tls qux (200; 7.777627ms) Jul 20 02:21:42.795: INFO: (2) /api/v1/namespaces/proxy-5378/pods/proxy-service-wd9kd-r22lh/proxy/: test (200; 7.820555ms) Jul 20 02:21:42.795: INFO: (2) /api/v1/namespaces/proxy-5378/pods/proxy-service-wd9kd-r22lh:162/proxy/: bar (200; 7.832104ms) Jul 20 02:21:42.795: INFO: (2) /api/v1/namespaces/proxy-5378/pods/http:proxy-service-wd9kd-r22lh:160/proxy/: foo (200; 7.850116ms) Jul 20 02:21:42.798: INFO: (3) /api/v1/namespaces/proxy-5378/pods/http:proxy-service-wd9kd-r22lh:1080/proxy/: ... (200; 3.063729ms) Jul 20 02:21:42.798: INFO: (3) /api/v1/namespaces/proxy-5378/pods/https:proxy-service-wd9kd-r22lh:460/proxy/: tls baz (200; 3.055266ms) Jul 20 02:21:42.798: INFO: (3) /api/v1/namespaces/proxy-5378/pods/proxy-service-wd9kd-r22lh:162/proxy/: bar (200; 3.101252ms) Jul 20 02:21:42.799: INFO: (3) /api/v1/namespaces/proxy-5378/pods/proxy-service-wd9kd-r22lh/proxy/: test (200; 3.30536ms) Jul 20 02:21:42.799: INFO: (3) /api/v1/namespaces/proxy-5378/pods/proxy-service-wd9kd-r22lh:1080/proxy/: test<... (200; 3.319657ms) Jul 20 02:21:42.799: INFO: (3) /api/v1/namespaces/proxy-5378/pods/https:proxy-service-wd9kd-r22lh:443/proxy/: test (200; 6.013622ms) Jul 20 02:21:42.807: INFO: (4) /api/v1/namespaces/proxy-5378/services/https:proxy-service-wd9kd:tlsportname2/proxy/: tls qux (200; 5.946916ms) Jul 20 02:21:42.807: INFO: (4) /api/v1/namespaces/proxy-5378/pods/proxy-service-wd9kd-r22lh:1080/proxy/: test<... (200; 6.030437ms) Jul 20 02:21:42.807: INFO: (4) /api/v1/namespaces/proxy-5378/services/https:proxy-service-wd9kd:tlsportname1/proxy/: tls baz (200; 6.374772ms) Jul 20 02:21:42.807: INFO: (4) /api/v1/namespaces/proxy-5378/pods/http:proxy-service-wd9kd-r22lh:160/proxy/: foo (200; 6.306514ms) Jul 20 02:21:42.807: INFO: (4) /api/v1/namespaces/proxy-5378/pods/proxy-service-wd9kd-r22lh:162/proxy/: bar (200; 6.347717ms) Jul 20 02:21:42.807: INFO: (4) /api/v1/namespaces/proxy-5378/pods/https:proxy-service-wd9kd-r22lh:460/proxy/: tls baz (200; 6.35386ms) Jul 20 02:21:42.807: INFO: (4) /api/v1/namespaces/proxy-5378/pods/https:proxy-service-wd9kd-r22lh:443/proxy/: ... 
(200; 6.391883ms) Jul 20 02:21:42.807: INFO: (4) /api/v1/namespaces/proxy-5378/pods/https:proxy-service-wd9kd-r22lh:462/proxy/: tls qux (200; 6.37304ms) Jul 20 02:21:42.807: INFO: (4) /api/v1/namespaces/proxy-5378/pods/http:proxy-service-wd9kd-r22lh:162/proxy/: bar (200; 6.334048ms) Jul 20 02:21:42.809: INFO: (4) /api/v1/namespaces/proxy-5378/services/proxy-service-wd9kd:portname2/proxy/: bar (200; 8.283433ms) Jul 20 02:21:42.809: INFO: (4) /api/v1/namespaces/proxy-5378/services/proxy-service-wd9kd:portname1/proxy/: foo (200; 8.417251ms) Jul 20 02:21:42.809: INFO: (4) /api/v1/namespaces/proxy-5378/services/http:proxy-service-wd9kd:portname2/proxy/: bar (200; 8.511879ms) Jul 20 02:21:42.810: INFO: (4) /api/v1/namespaces/proxy-5378/services/http:proxy-service-wd9kd:portname1/proxy/: foo (200; 8.95105ms) Jul 20 02:21:42.813: INFO: (5) /api/v1/namespaces/proxy-5378/pods/proxy-service-wd9kd-r22lh/proxy/: test (200; 3.740353ms) Jul 20 02:21:42.813: INFO: (5) /api/v1/namespaces/proxy-5378/pods/proxy-service-wd9kd-r22lh:162/proxy/: bar (200; 3.824112ms) Jul 20 02:21:42.815: INFO: (5) /api/v1/namespaces/proxy-5378/services/proxy-service-wd9kd:portname2/proxy/: bar (200; 5.299916ms) Jul 20 02:21:42.815: INFO: (5) /api/v1/namespaces/proxy-5378/services/http:proxy-service-wd9kd:portname2/proxy/: bar (200; 5.335679ms) Jul 20 02:21:42.815: INFO: (5) /api/v1/namespaces/proxy-5378/services/proxy-service-wd9kd:portname1/proxy/: foo (200; 5.54501ms) Jul 20 02:21:42.815: INFO: (5) /api/v1/namespaces/proxy-5378/pods/https:proxy-service-wd9kd-r22lh:460/proxy/: tls baz (200; 5.658115ms) Jul 20 02:21:42.816: INFO: (5) /api/v1/namespaces/proxy-5378/pods/http:proxy-service-wd9kd-r22lh:162/proxy/: bar (200; 5.856287ms) Jul 20 02:21:42.816: INFO: (5) /api/v1/namespaces/proxy-5378/pods/proxy-service-wd9kd-r22lh:160/proxy/: foo (200; 5.916324ms) Jul 20 02:21:42.816: INFO: (5) /api/v1/namespaces/proxy-5378/pods/proxy-service-wd9kd-r22lh:1080/proxy/: test<... (200; 5.971684ms) Jul 20 02:21:42.816: INFO: (5) /api/v1/namespaces/proxy-5378/pods/https:proxy-service-wd9kd-r22lh:443/proxy/: ... (200; 5.976012ms) Jul 20 02:21:42.816: INFO: (5) /api/v1/namespaces/proxy-5378/services/https:proxy-service-wd9kd:tlsportname1/proxy/: tls baz (200; 5.928046ms) Jul 20 02:21:42.816: INFO: (5) /api/v1/namespaces/proxy-5378/services/http:proxy-service-wd9kd:portname1/proxy/: foo (200; 5.924563ms) Jul 20 02:21:42.816: INFO: (5) /api/v1/namespaces/proxy-5378/pods/https:proxy-service-wd9kd-r22lh:462/proxy/: tls qux (200; 6.099908ms) Jul 20 02:21:42.816: INFO: (5) /api/v1/namespaces/proxy-5378/services/https:proxy-service-wd9kd:tlsportname2/proxy/: tls qux (200; 6.083404ms) Jul 20 02:21:42.816: INFO: (5) /api/v1/namespaces/proxy-5378/pods/http:proxy-service-wd9kd-r22lh:160/proxy/: foo (200; 6.187224ms) Jul 20 02:21:42.821: INFO: (6) /api/v1/namespaces/proxy-5378/pods/proxy-service-wd9kd-r22lh:162/proxy/: bar (200; 5.01217ms) Jul 20 02:21:42.821: INFO: (6) /api/v1/namespaces/proxy-5378/pods/proxy-service-wd9kd-r22lh:1080/proxy/: test<... (200; 5.101425ms) Jul 20 02:21:42.822: INFO: (6) /api/v1/namespaces/proxy-5378/pods/http:proxy-service-wd9kd-r22lh:1080/proxy/: ... 
(200; 5.59808ms) Jul 20 02:21:42.822: INFO: (6) /api/v1/namespaces/proxy-5378/pods/http:proxy-service-wd9kd-r22lh:160/proxy/: foo (200; 5.588563ms) Jul 20 02:21:42.822: INFO: (6) /api/v1/namespaces/proxy-5378/pods/proxy-service-wd9kd-r22lh:160/proxy/: foo (200; 5.684623ms) Jul 20 02:21:42.822: INFO: (6) /api/v1/namespaces/proxy-5378/pods/proxy-service-wd9kd-r22lh/proxy/: test (200; 5.611128ms) Jul 20 02:21:42.822: INFO: (6) /api/v1/namespaces/proxy-5378/pods/https:proxy-service-wd9kd-r22lh:443/proxy/: test<... (200; 10.543206ms) Jul 20 02:21:42.834: INFO: (7) /api/v1/namespaces/proxy-5378/services/proxy-service-wd9kd:portname1/proxy/: foo (200; 10.550339ms) Jul 20 02:21:42.834: INFO: (7) /api/v1/namespaces/proxy-5378/services/proxy-service-wd9kd:portname2/proxy/: bar (200; 10.53725ms) Jul 20 02:21:42.834: INFO: (7) /api/v1/namespaces/proxy-5378/pods/http:proxy-service-wd9kd-r22lh:1080/proxy/: ... (200; 10.578044ms) Jul 20 02:21:42.834: INFO: (7) /api/v1/namespaces/proxy-5378/pods/proxy-service-wd9kd-r22lh/proxy/: test (200; 10.560371ms) Jul 20 02:21:42.834: INFO: (7) /api/v1/namespaces/proxy-5378/pods/proxy-service-wd9kd-r22lh:162/proxy/: bar (200; 10.608673ms) Jul 20 02:21:42.834: INFO: (7) /api/v1/namespaces/proxy-5378/pods/https:proxy-service-wd9kd-r22lh:462/proxy/: tls qux (200; 10.58565ms) Jul 20 02:21:42.834: INFO: (7) /api/v1/namespaces/proxy-5378/pods/http:proxy-service-wd9kd-r22lh:162/proxy/: bar (200; 10.592171ms) Jul 20 02:21:42.834: INFO: (7) /api/v1/namespaces/proxy-5378/services/http:proxy-service-wd9kd:portname1/proxy/: foo (200; 10.604781ms) Jul 20 02:21:42.837: INFO: (8) /api/v1/namespaces/proxy-5378/pods/proxy-service-wd9kd-r22lh:1080/proxy/: test<... (200; 2.718965ms) Jul 20 02:21:42.837: INFO: (8) /api/v1/namespaces/proxy-5378/pods/http:proxy-service-wd9kd-r22lh:1080/proxy/: ... 
(200; 2.746055ms) Jul 20 02:21:42.837: INFO: (8) /api/v1/namespaces/proxy-5378/pods/https:proxy-service-wd9kd-r22lh:460/proxy/: tls baz (200; 2.818781ms) Jul 20 02:21:42.837: INFO: (8) /api/v1/namespaces/proxy-5378/pods/proxy-service-wd9kd-r22lh:162/proxy/: bar (200; 2.89147ms) Jul 20 02:21:42.838: INFO: (8) /api/v1/namespaces/proxy-5378/pods/proxy-service-wd9kd-r22lh:160/proxy/: foo (200; 2.989174ms) Jul 20 02:21:42.838: INFO: (8) /api/v1/namespaces/proxy-5378/pods/http:proxy-service-wd9kd-r22lh:160/proxy/: foo (200; 3.66302ms) Jul 20 02:21:42.838: INFO: (8) /api/v1/namespaces/proxy-5378/services/http:proxy-service-wd9kd:portname1/proxy/: foo (200; 3.659593ms) Jul 20 02:21:42.838: INFO: (8) /api/v1/namespaces/proxy-5378/services/https:proxy-service-wd9kd:tlsportname1/proxy/: tls baz (200; 3.685541ms) Jul 20 02:21:42.838: INFO: (8) /api/v1/namespaces/proxy-5378/pods/https:proxy-service-wd9kd-r22lh:462/proxy/: tls qux (200; 3.694939ms) Jul 20 02:21:42.838: INFO: (8) /api/v1/namespaces/proxy-5378/services/proxy-service-wd9kd:portname2/proxy/: bar (200; 3.796501ms) Jul 20 02:21:42.838: INFO: (8) /api/v1/namespaces/proxy-5378/services/https:proxy-service-wd9kd:tlsportname2/proxy/: tls qux (200; 3.876011ms) Jul 20 02:21:42.838: INFO: (8) /api/v1/namespaces/proxy-5378/pods/https:proxy-service-wd9kd-r22lh:443/proxy/: test (200; 4.11561ms) Jul 20 02:21:42.839: INFO: (8) /api/v1/namespaces/proxy-5378/services/proxy-service-wd9kd:portname1/proxy/: foo (200; 4.148118ms) Jul 20 02:21:42.839: INFO: (8) /api/v1/namespaces/proxy-5378/services/http:proxy-service-wd9kd:portname2/proxy/: bar (200; 4.742017ms) Jul 20 02:21:42.843: INFO: (9) /api/v1/namespaces/proxy-5378/services/proxy-service-wd9kd:portname2/proxy/: bar (200; 4.055915ms) Jul 20 02:21:42.843: INFO: (9) /api/v1/namespaces/proxy-5378/services/http:proxy-service-wd9kd:portname1/proxy/: foo (200; 4.077637ms) Jul 20 02:21:42.844: INFO: (9) /api/v1/namespaces/proxy-5378/services/proxy-service-wd9kd:portname1/proxy/: foo (200; 4.19839ms) Jul 20 02:21:42.844: INFO: (9) /api/v1/namespaces/proxy-5378/services/https:proxy-service-wd9kd:tlsportname1/proxy/: tls baz (200; 4.214375ms) Jul 20 02:21:42.844: INFO: (9) /api/v1/namespaces/proxy-5378/services/http:proxy-service-wd9kd:portname2/proxy/: bar (200; 4.231924ms) Jul 20 02:21:42.844: INFO: (9) /api/v1/namespaces/proxy-5378/pods/proxy-service-wd9kd-r22lh:162/proxy/: bar (200; 4.547568ms) Jul 20 02:21:42.845: INFO: (9) /api/v1/namespaces/proxy-5378/pods/http:proxy-service-wd9kd-r22lh:162/proxy/: bar (200; 5.225229ms) Jul 20 02:21:42.845: INFO: (9) /api/v1/namespaces/proxy-5378/services/https:proxy-service-wd9kd:tlsportname2/proxy/: tls qux (200; 5.298461ms) Jul 20 02:21:42.845: INFO: (9) /api/v1/namespaces/proxy-5378/pods/proxy-service-wd9kd-r22lh:160/proxy/: foo (200; 5.283113ms) Jul 20 02:21:42.845: INFO: (9) /api/v1/namespaces/proxy-5378/pods/https:proxy-service-wd9kd-r22lh:443/proxy/: test<... 
(200; 5.303256ms) Jul 20 02:21:42.845: INFO: (9) /api/v1/namespaces/proxy-5378/pods/https:proxy-service-wd9kd-r22lh:460/proxy/: tls baz (200; 5.575484ms) Jul 20 02:21:42.845: INFO: (9) /api/v1/namespaces/proxy-5378/pods/https:proxy-service-wd9kd-r22lh:462/proxy/: tls qux (200; 5.590202ms) Jul 20 02:21:42.845: INFO: (9) /api/v1/namespaces/proxy-5378/pods/http:proxy-service-wd9kd-r22lh:160/proxy/: foo (200; 5.526218ms) Jul 20 02:21:42.845: INFO: (9) /api/v1/namespaces/proxy-5378/pods/proxy-service-wd9kd-r22lh/proxy/: test (200; 5.536652ms) Jul 20 02:21:42.845: INFO: (9) /api/v1/namespaces/proxy-5378/pods/http:proxy-service-wd9kd-r22lh:1080/proxy/: ... (200; 5.637448ms) Jul 20 02:21:42.854: INFO: (10) /api/v1/namespaces/proxy-5378/pods/proxy-service-wd9kd-r22lh/proxy/: test (200; 8.437173ms) Jul 20 02:21:42.854: INFO: (10) /api/v1/namespaces/proxy-5378/pods/http:proxy-service-wd9kd-r22lh:162/proxy/: bar (200; 8.393107ms) Jul 20 02:21:42.854: INFO: (10) /api/v1/namespaces/proxy-5378/pods/http:proxy-service-wd9kd-r22lh:1080/proxy/: ... (200; 8.502146ms) Jul 20 02:21:42.855: INFO: (10) /api/v1/namespaces/proxy-5378/services/proxy-service-wd9kd:portname1/proxy/: foo (200; 9.669583ms) Jul 20 02:21:42.855: INFO: (10) /api/v1/namespaces/proxy-5378/services/proxy-service-wd9kd:portname2/proxy/: bar (200; 9.754182ms) Jul 20 02:21:42.855: INFO: (10) /api/v1/namespaces/proxy-5378/services/https:proxy-service-wd9kd:tlsportname1/proxy/: tls baz (200; 10.125297ms) Jul 20 02:21:42.855: INFO: (10) /api/v1/namespaces/proxy-5378/services/http:proxy-service-wd9kd:portname2/proxy/: bar (200; 10.021384ms) Jul 20 02:21:42.855: INFO: (10) /api/v1/namespaces/proxy-5378/services/http:proxy-service-wd9kd:portname1/proxy/: foo (200; 10.065393ms) Jul 20 02:21:42.855: INFO: (10) /api/v1/namespaces/proxy-5378/pods/https:proxy-service-wd9kd-r22lh:462/proxy/: tls qux (200; 10.304613ms) Jul 20 02:21:42.855: INFO: (10) /api/v1/namespaces/proxy-5378/pods/http:proxy-service-wd9kd-r22lh:160/proxy/: foo (200; 10.341065ms) Jul 20 02:21:42.855: INFO: (10) /api/v1/namespaces/proxy-5378/pods/proxy-service-wd9kd-r22lh:160/proxy/: foo (200; 10.381119ms) Jul 20 02:21:42.855: INFO: (10) /api/v1/namespaces/proxy-5378/pods/proxy-service-wd9kd-r22lh:1080/proxy/: test<... (200; 10.334946ms) Jul 20 02:21:42.855: INFO: (10) /api/v1/namespaces/proxy-5378/services/https:proxy-service-wd9kd:tlsportname2/proxy/: tls qux (200; 10.426015ms) Jul 20 02:21:42.856: INFO: (10) /api/v1/namespaces/proxy-5378/pods/proxy-service-wd9kd-r22lh:162/proxy/: bar (200; 10.425562ms) Jul 20 02:21:42.855: INFO: (10) /api/v1/namespaces/proxy-5378/pods/https:proxy-service-wd9kd-r22lh:443/proxy/: ... (200; 2.60383ms) Jul 20 02:21:42.859: INFO: (11) /api/v1/namespaces/proxy-5378/pods/proxy-service-wd9kd-r22lh/proxy/: test (200; 2.59981ms) Jul 20 02:21:42.859: INFO: (11) /api/v1/namespaces/proxy-5378/pods/https:proxy-service-wd9kd-r22lh:443/proxy/: test<... (200; 4.118826ms) Jul 20 02:21:42.860: INFO: (11) /api/v1/namespaces/proxy-5378/services/https:proxy-service-wd9kd:tlsportname2/proxy/: tls qux (200; 4.232204ms) Jul 20 02:21:42.860: INFO: (11) /api/v1/namespaces/proxy-5378/services/proxy-service-wd9kd:portname2/proxy/: bar (200; 4.189905ms) Jul 20 02:21:42.863: INFO: (12) /api/v1/namespaces/proxy-5378/pods/proxy-service-wd9kd-r22lh:1080/proxy/: test<... (200; 2.271179ms) Jul 20 02:21:42.864: INFO: (12) /api/v1/namespaces/proxy-5378/pods/https:proxy-service-wd9kd-r22lh:443/proxy/: ... 
(200; 3.949222ms) Jul 20 02:21:42.864: INFO: (12) /api/v1/namespaces/proxy-5378/pods/proxy-service-wd9kd-r22lh:162/proxy/: bar (200; 3.867918ms) Jul 20 02:21:42.864: INFO: (12) /api/v1/namespaces/proxy-5378/pods/https:proxy-service-wd9kd-r22lh:462/proxy/: tls qux (200; 3.55425ms) Jul 20 02:21:42.864: INFO: (12) /api/v1/namespaces/proxy-5378/pods/https:proxy-service-wd9kd-r22lh:460/proxy/: tls baz (200; 3.718449ms) Jul 20 02:21:42.864: INFO: (12) /api/v1/namespaces/proxy-5378/services/https:proxy-service-wd9kd:tlsportname1/proxy/: tls baz (200; 4.054756ms) Jul 20 02:21:42.865: INFO: (12) /api/v1/namespaces/proxy-5378/services/http:proxy-service-wd9kd:portname1/proxy/: foo (200; 3.606008ms) Jul 20 02:21:42.865: INFO: (12) /api/v1/namespaces/proxy-5378/pods/proxy-service-wd9kd-r22lh:160/proxy/: foo (200; 4.042545ms) Jul 20 02:21:42.865: INFO: (12) /api/v1/namespaces/proxy-5378/pods/proxy-service-wd9kd-r22lh/proxy/: test (200; 3.918497ms) Jul 20 02:21:42.865: INFO: (12) /api/v1/namespaces/proxy-5378/services/proxy-service-wd9kd:portname2/proxy/: bar (200; 3.702263ms) Jul 20 02:21:42.865: INFO: (12) /api/v1/namespaces/proxy-5378/services/https:proxy-service-wd9kd:tlsportname2/proxy/: tls qux (200; 3.740219ms) Jul 20 02:21:42.869: INFO: (13) /api/v1/namespaces/proxy-5378/pods/http:proxy-service-wd9kd-r22lh:160/proxy/: foo (200; 3.946747ms) Jul 20 02:21:42.869: INFO: (13) /api/v1/namespaces/proxy-5378/pods/proxy-service-wd9kd-r22lh:162/proxy/: bar (200; 4.063573ms) Jul 20 02:21:42.869: INFO: (13) /api/v1/namespaces/proxy-5378/pods/proxy-service-wd9kd-r22lh:1080/proxy/: test<... (200; 4.062521ms) Jul 20 02:21:42.870: INFO: (13) /api/v1/namespaces/proxy-5378/services/http:proxy-service-wd9kd:portname2/proxy/: bar (200; 4.961304ms) Jul 20 02:21:42.870: INFO: (13) /api/v1/namespaces/proxy-5378/services/proxy-service-wd9kd:portname2/proxy/: bar (200; 4.967006ms) Jul 20 02:21:42.870: INFO: (13) /api/v1/namespaces/proxy-5378/pods/proxy-service-wd9kd-r22lh/proxy/: test (200; 4.978497ms) Jul 20 02:21:42.870: INFO: (13) /api/v1/namespaces/proxy-5378/services/http:proxy-service-wd9kd:portname1/proxy/: foo (200; 5.058533ms) Jul 20 02:21:42.870: INFO: (13) /api/v1/namespaces/proxy-5378/services/https:proxy-service-wd9kd:tlsportname2/proxy/: tls qux (200; 5.096007ms) Jul 20 02:21:42.870: INFO: (13) /api/v1/namespaces/proxy-5378/services/proxy-service-wd9kd:portname1/proxy/: foo (200; 5.190884ms) Jul 20 02:21:42.870: INFO: (13) /api/v1/namespaces/proxy-5378/services/https:proxy-service-wd9kd:tlsportname1/proxy/: tls baz (200; 5.16735ms) Jul 20 02:21:42.870: INFO: (13) /api/v1/namespaces/proxy-5378/pods/https:proxy-service-wd9kd-r22lh:462/proxy/: tls qux (200; 5.444692ms) Jul 20 02:21:42.870: INFO: (13) /api/v1/namespaces/proxy-5378/pods/http:proxy-service-wd9kd-r22lh:1080/proxy/: ... 
(200; 5.463918ms) Jul 20 02:21:42.870: INFO: (13) /api/v1/namespaces/proxy-5378/pods/http:proxy-service-wd9kd-r22lh:162/proxy/: bar (200; 5.462768ms) Jul 20 02:21:42.870: INFO: (13) /api/v1/namespaces/proxy-5378/pods/proxy-service-wd9kd-r22lh:160/proxy/: foo (200; 5.462815ms) Jul 20 02:21:42.870: INFO: (13) /api/v1/namespaces/proxy-5378/pods/https:proxy-service-wd9kd-r22lh:460/proxy/: tls baz (200; 5.538059ms) Jul 20 02:21:42.870: INFO: (13) /api/v1/namespaces/proxy-5378/pods/https:proxy-service-wd9kd-r22lh:443/proxy/: test (200; 5.307971ms) Jul 20 02:21:42.876: INFO: (14) /api/v1/namespaces/proxy-5378/pods/https:proxy-service-wd9kd-r22lh:460/proxy/: tls baz (200; 5.280064ms) Jul 20 02:21:42.876: INFO: (14) /api/v1/namespaces/proxy-5378/pods/proxy-service-wd9kd-r22lh:160/proxy/: foo (200; 5.326285ms) Jul 20 02:21:42.876: INFO: (14) /api/v1/namespaces/proxy-5378/pods/proxy-service-wd9kd-r22lh:162/proxy/: bar (200; 5.479166ms) Jul 20 02:21:42.876: INFO: (14) /api/v1/namespaces/proxy-5378/pods/https:proxy-service-wd9kd-r22lh:462/proxy/: tls qux (200; 5.57488ms) Jul 20 02:21:42.876: INFO: (14) /api/v1/namespaces/proxy-5378/pods/http:proxy-service-wd9kd-r22lh:1080/proxy/: ... (200; 5.520574ms) Jul 20 02:21:42.876: INFO: (14) /api/v1/namespaces/proxy-5378/pods/http:proxy-service-wd9kd-r22lh:160/proxy/: foo (200; 5.54937ms) Jul 20 02:21:42.876: INFO: (14) /api/v1/namespaces/proxy-5378/pods/proxy-service-wd9kd-r22lh:1080/proxy/: test<... (200; 5.592772ms) Jul 20 02:21:42.876: INFO: (14) /api/v1/namespaces/proxy-5378/services/http:proxy-service-wd9kd:portname1/proxy/: foo (200; 5.577051ms) Jul 20 02:21:42.876: INFO: (14) /api/v1/namespaces/proxy-5378/pods/https:proxy-service-wd9kd-r22lh:443/proxy/: test (200; 63.726949ms) Jul 20 02:21:42.940: INFO: (15) /api/v1/namespaces/proxy-5378/pods/https:proxy-service-wd9kd-r22lh:443/proxy/: ... (200; 65.37186ms) Jul 20 02:21:42.942: INFO: (15) /api/v1/namespaces/proxy-5378/services/proxy-service-wd9kd:portname2/proxy/: bar (200; 65.446861ms) Jul 20 02:21:42.942: INFO: (15) /api/v1/namespaces/proxy-5378/pods/https:proxy-service-wd9kd-r22lh:460/proxy/: tls baz (200; 65.926012ms) Jul 20 02:21:42.942: INFO: (15) /api/v1/namespaces/proxy-5378/services/http:proxy-service-wd9kd:portname1/proxy/: foo (200; 65.928915ms) Jul 20 02:21:42.943: INFO: (15) /api/v1/namespaces/proxy-5378/services/http:proxy-service-wd9kd:portname2/proxy/: bar (200; 66.369063ms) Jul 20 02:21:42.943: INFO: (15) /api/v1/namespaces/proxy-5378/services/proxy-service-wd9kd:portname1/proxy/: foo (200; 66.598173ms) Jul 20 02:21:42.943: INFO: (15) /api/v1/namespaces/proxy-5378/services/https:proxy-service-wd9kd:tlsportname1/proxy/: tls baz (200; 66.749107ms) Jul 20 02:21:42.943: INFO: (15) /api/v1/namespaces/proxy-5378/pods/proxy-service-wd9kd-r22lh:1080/proxy/: test<... (200; 66.754575ms) Jul 20 02:21:42.943: INFO: (15) /api/v1/namespaces/proxy-5378/pods/http:proxy-service-wd9kd-r22lh:160/proxy/: foo (200; 66.767159ms) Jul 20 02:21:42.943: INFO: (15) /api/v1/namespaces/proxy-5378/services/https:proxy-service-wd9kd:tlsportname2/proxy/: tls qux (200; 66.538132ms) Jul 20 02:21:42.948: INFO: (16) /api/v1/namespaces/proxy-5378/pods/https:proxy-service-wd9kd-r22lh:443/proxy/: test<... 
(200; 4.928741ms) Jul 20 02:21:42.950: INFO: (16) /api/v1/namespaces/proxy-5378/services/https:proxy-service-wd9kd:tlsportname1/proxy/: tls baz (200; 6.517471ms) Jul 20 02:21:42.950: INFO: (16) /api/v1/namespaces/proxy-5378/services/proxy-service-wd9kd:portname1/proxy/: foo (200; 6.406365ms) Jul 20 02:21:42.950: INFO: (16) /api/v1/namespaces/proxy-5378/services/http:proxy-service-wd9kd:portname1/proxy/: foo (200; 6.39396ms) Jul 20 02:21:42.950: INFO: (16) /api/v1/namespaces/proxy-5378/pods/proxy-service-wd9kd-r22lh/proxy/: test (200; 6.574845ms) Jul 20 02:21:42.950: INFO: (16) /api/v1/namespaces/proxy-5378/pods/proxy-service-wd9kd-r22lh:162/proxy/: bar (200; 6.489121ms) Jul 20 02:21:42.950: INFO: (16) /api/v1/namespaces/proxy-5378/services/http:proxy-service-wd9kd:portname2/proxy/: bar (200; 6.666263ms) Jul 20 02:21:42.950: INFO: (16) /api/v1/namespaces/proxy-5378/services/https:proxy-service-wd9kd:tlsportname2/proxy/: tls qux (200; 6.627553ms) Jul 20 02:21:42.950: INFO: (16) /api/v1/namespaces/proxy-5378/pods/http:proxy-service-wd9kd-r22lh:1080/proxy/: ... (200; 6.72041ms) Jul 20 02:21:42.950: INFO: (16) /api/v1/namespaces/proxy-5378/pods/http:proxy-service-wd9kd-r22lh:162/proxy/: bar (200; 6.879972ms) Jul 20 02:21:42.950: INFO: (16) /api/v1/namespaces/proxy-5378/services/proxy-service-wd9kd:portname2/proxy/: bar (200; 6.990452ms) Jul 20 02:21:42.951: INFO: (16) /api/v1/namespaces/proxy-5378/pods/https:proxy-service-wd9kd-r22lh:462/proxy/: tls qux (200; 7.342254ms) Jul 20 02:21:42.951: INFO: (16) /api/v1/namespaces/proxy-5378/pods/http:proxy-service-wd9kd-r22lh:160/proxy/: foo (200; 7.447806ms) Jul 20 02:21:42.951: INFO: (16) /api/v1/namespaces/proxy-5378/pods/https:proxy-service-wd9kd-r22lh:460/proxy/: tls baz (200; 7.734398ms) Jul 20 02:21:42.954: INFO: (17) /api/v1/namespaces/proxy-5378/pods/http:proxy-service-wd9kd-r22lh:160/proxy/: foo (200; 3.384354ms) Jul 20 02:21:42.954: INFO: (17) /api/v1/namespaces/proxy-5378/pods/proxy-service-wd9kd-r22lh:162/proxy/: bar (200; 3.342579ms) Jul 20 02:21:42.955: INFO: (17) /api/v1/namespaces/proxy-5378/services/http:proxy-service-wd9kd:portname1/proxy/: foo (200; 3.864046ms) Jul 20 02:21:42.955: INFO: (17) /api/v1/namespaces/proxy-5378/pods/https:proxy-service-wd9kd-r22lh:443/proxy/: test<... (200; 4.064615ms) Jul 20 02:21:42.955: INFO: (17) /api/v1/namespaces/proxy-5378/pods/http:proxy-service-wd9kd-r22lh:162/proxy/: bar (200; 4.036676ms) Jul 20 02:21:42.955: INFO: (17) /api/v1/namespaces/proxy-5378/services/https:proxy-service-wd9kd:tlsportname2/proxy/: tls qux (200; 4.121038ms) Jul 20 02:21:42.955: INFO: (17) /api/v1/namespaces/proxy-5378/pods/https:proxy-service-wd9kd-r22lh:462/proxy/: tls qux (200; 4.000021ms) Jul 20 02:21:42.955: INFO: (17) /api/v1/namespaces/proxy-5378/pods/proxy-service-wd9kd-r22lh:160/proxy/: foo (200; 4.424795ms) Jul 20 02:21:42.955: INFO: (17) /api/v1/namespaces/proxy-5378/pods/http:proxy-service-wd9kd-r22lh:1080/proxy/: ... 
(200; 4.433017ms) Jul 20 02:21:42.956: INFO: (17) /api/v1/namespaces/proxy-5378/pods/https:proxy-service-wd9kd-r22lh:460/proxy/: tls baz (200; 4.829543ms) Jul 20 02:21:42.956: INFO: (17) /api/v1/namespaces/proxy-5378/pods/proxy-service-wd9kd-r22lh/proxy/: test (200; 4.843787ms) Jul 20 02:21:42.956: INFO: (17) /api/v1/namespaces/proxy-5378/services/proxy-service-wd9kd:portname1/proxy/: foo (200; 4.841188ms) Jul 20 02:21:42.956: INFO: (17) /api/v1/namespaces/proxy-5378/services/proxy-service-wd9kd:portname2/proxy/: bar (200; 5.01653ms) Jul 20 02:21:42.956: INFO: (17) /api/v1/namespaces/proxy-5378/services/http:proxy-service-wd9kd:portname2/proxy/: bar (200; 4.953028ms) Jul 20 02:21:42.956: INFO: (17) /api/v1/namespaces/proxy-5378/services/https:proxy-service-wd9kd:tlsportname1/proxy/: tls baz (200; 5.14346ms) Jul 20 02:21:42.959: INFO: (18) /api/v1/namespaces/proxy-5378/pods/proxy-service-wd9kd-r22lh:160/proxy/: foo (200; 2.933348ms) Jul 20 02:21:42.959: INFO: (18) /api/v1/namespaces/proxy-5378/pods/proxy-service-wd9kd-r22lh:1080/proxy/: test<... (200; 2.92085ms) Jul 20 02:21:42.959: INFO: (18) /api/v1/namespaces/proxy-5378/pods/https:proxy-service-wd9kd-r22lh:460/proxy/: tls baz (200; 3.133742ms) Jul 20 02:21:42.960: INFO: (18) /api/v1/namespaces/proxy-5378/pods/https:proxy-service-wd9kd-r22lh:443/proxy/: ... (200; 7.455817ms) Jul 20 02:21:42.964: INFO: (18) /api/v1/namespaces/proxy-5378/pods/proxy-service-wd9kd-r22lh/proxy/: test (200; 7.636318ms) Jul 20 02:21:42.969: INFO: (19) /api/v1/namespaces/proxy-5378/pods/proxy-service-wd9kd-r22lh/proxy/: test (200; 5.401006ms) Jul 20 02:21:42.969: INFO: (19) /api/v1/namespaces/proxy-5378/pods/https:proxy-service-wd9kd-r22lh:460/proxy/: tls baz (200; 5.45202ms) Jul 20 02:21:42.970: INFO: (19) /api/v1/namespaces/proxy-5378/pods/proxy-service-wd9kd-r22lh:1080/proxy/: test<... (200; 6.144513ms) Jul 20 02:21:42.970: INFO: (19) /api/v1/namespaces/proxy-5378/pods/http:proxy-service-wd9kd-r22lh:160/proxy/: foo (200; 6.199758ms) Jul 20 02:21:42.970: INFO: (19) /api/v1/namespaces/proxy-5378/pods/http:proxy-service-wd9kd-r22lh:162/proxy/: bar (200; 6.208328ms) Jul 20 02:21:42.970: INFO: (19) /api/v1/namespaces/proxy-5378/pods/https:proxy-service-wd9kd-r22lh:462/proxy/: tls qux (200; 6.371387ms) Jul 20 02:21:42.970: INFO: (19) /api/v1/namespaces/proxy-5378/pods/proxy-service-wd9kd-r22lh:160/proxy/: foo (200; 6.326916ms) Jul 20 02:21:42.970: INFO: (19) /api/v1/namespaces/proxy-5378/pods/http:proxy-service-wd9kd-r22lh:1080/proxy/: ... (200; 6.320894ms) Jul 20 02:21:42.970: INFO: (19) /api/v1/namespaces/proxy-5378/pods/proxy-service-wd9kd-r22lh:162/proxy/: bar (200; 6.403714ms) Jul 20 02:21:42.971: INFO: (19) /api/v1/namespaces/proxy-5378/pods/https:proxy-service-wd9kd-r22lh:443/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap that has name configmap-test-emptyKey-926b9c47-a458-4ccd-89b7-97f1bb6ec1ac [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:21:54.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5807" for this suite. 
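The empty-key ConfigMap test that just finished is a pure negative case: apiserver validation rejects a data key of "", so the Create call must fail and no pod is ever scheduled. A sketch of the same check against a live cluster, assuming ~/.kube/config; the namespace and object name are illustrative:

// emptykey.go - expect the apiserver to reject a ConfigMap whose data map
// contains an empty key.
package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-emptykey"},
		Data:       map[string]string{"": "value-1"}, // empty key: invalid by validation rules
	}
	_, err = cs.CoreV1().ConfigMaps("default").Create(context.TODO(), cm, metav1.CreateOptions{})
	if err == nil {
		panic("expected a validation error for the empty key, got none")
	}
	fmt.Println("create failed as expected:", err)
}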
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":294,"completed":108,"skipped":1975,"failed":0} SSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:21:54.029: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jul 20 02:21:54.074: INFO: Waiting up to 5m0s for pod "downwardapi-volume-46ed901b-002f-4cf9-8e7f-6d95f9403667" in namespace "projected-7896" to be "Succeeded or Failed" Jul 20 02:21:54.077: INFO: Pod "downwardapi-volume-46ed901b-002f-4cf9-8e7f-6d95f9403667": Phase="Pending", Reason="", readiness=false. Elapsed: 3.627706ms Jul 20 02:21:56.082: INFO: Pod "downwardapi-volume-46ed901b-002f-4cf9-8e7f-6d95f9403667": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008262488s Jul 20 02:21:58.087: INFO: Pod "downwardapi-volume-46ed901b-002f-4cf9-8e7f-6d95f9403667": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012975329s STEP: Saw pod success Jul 20 02:21:58.087: INFO: Pod "downwardapi-volume-46ed901b-002f-4cf9-8e7f-6d95f9403667" satisfied condition "Succeeded or Failed" Jul 20 02:21:58.090: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-46ed901b-002f-4cf9-8e7f-6d95f9403667 container client-container: STEP: delete the pod Jul 20 02:21:58.132: INFO: Waiting for pod downwardapi-volume-46ed901b-002f-4cf9-8e7f-6d95f9403667 to disappear Jul 20 02:21:58.140: INFO: Pod downwardapi-volume-46ed901b-002f-4cf9-8e7f-6d95f9403667 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:21:58.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7896" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":294,"completed":109,"skipped":1980,"failed":0} SS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:21:58.173: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:22:15.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4197" for this suite. • [SLOW TEST:17.138 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":294,"completed":110,"skipped":1982,"failed":0} SSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:22:15.311: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 20 02:22:15.395: INFO: Pod name rollover-pod: Found 0 pods out of 1 Jul 20 02:22:20.417: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jul 20 02:22:20.417: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Jul 20 02:22:22.421: INFO: Creating deployment "test-rollover-deployment" Jul 20 02:22:22.450: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Jul 20 02:22:24.461: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Jul 20 02:22:24.465: INFO: Ensure that both replica sets have 1 created replica Jul 20 02:22:24.469: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Jul 20 02:22:24.474: INFO: Updating deployment test-rollover-deployment Jul 20 02:22:24.474: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Jul 20 02:22:26.498: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Jul 20 02:22:26.504: INFO: Make sure deployment "test-rollover-deployment" is complete Jul 20 02:22:26.509: INFO: all replica sets need to contain the pod-template-hash label Jul 20 02:22:26.509: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730808542, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730808542, loc:(*time.Location)(0x7deddc0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730808544, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730808542, loc:(*time.Location)(0x7deddc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6f68b9c6f9\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 20 02:22:28.517: INFO: all replica sets need to contain the pod-template-hash label Jul 20 02:22:28.517: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730808542, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730808542, loc:(*time.Location)(0x7deddc0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730808548, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730808542, loc:(*time.Location)(0x7deddc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6f68b9c6f9\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 20 02:22:30.518: INFO: all replica sets need to contain the pod-template-hash label Jul 20 02:22:30.518: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730808542, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730808542, loc:(*time.Location)(0x7deddc0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730808548, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730808542, loc:(*time.Location)(0x7deddc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6f68b9c6f9\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 20 02:22:32.517: INFO: all replica sets need to contain the pod-template-hash label Jul 20 02:22:32.517: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730808542, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730808542, loc:(*time.Location)(0x7deddc0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730808548, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730808542, loc:(*time.Location)(0x7deddc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6f68b9c6f9\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 20 02:22:34.517: INFO: all replica sets need to contain the pod-template-hash label Jul 20 02:22:34.517: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730808542, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730808542, loc:(*time.Location)(0x7deddc0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730808548, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730808542, loc:(*time.Location)(0x7deddc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6f68b9c6f9\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 20 02:22:36.516: INFO: all replica sets need to contain the pod-template-hash label Jul 20 02:22:36.516: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730808542, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730808542, loc:(*time.Location)(0x7deddc0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730808548, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730808542, loc:(*time.Location)(0x7deddc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6f68b9c6f9\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 20 02:22:38.754: INFO: Jul 20 02:22:38.754: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730808542, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730808542, loc:(*time.Location)(0x7deddc0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730808558, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730808542, loc:(*time.Location)(0x7deddc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6f68b9c6f9\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 20 02:22:41.114: INFO: Jul 20 02:22:41.114: INFO: Ensure that both 
old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 Jul 20 02:22:41.803: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-1268 /apis/apps/v1/namespaces/deployment-1268/deployments/test-rollover-deployment 443d4f44-2d62-450d-94f9-5c8f30147082 94581 2 2020-07-20 02:22:22 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-07-20 02:22:24 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-07-20 02:22:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002a6d488 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-07-20 02:22:22 +0000 UTC,LastTransitionTime:2020-07-20 02:22:22 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-6f68b9c6f9" has 
successfully progressed.,LastUpdateTime:2020-07-20 02:22:40 +0000 UTC,LastTransitionTime:2020-07-20 02:22:22 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Jul 20 02:22:42.055: INFO: New ReplicaSet "test-rollover-deployment-6f68b9c6f9" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-6f68b9c6f9 deployment-1268 /apis/apps/v1/namespaces/deployment-1268/replicasets/test-rollover-deployment-6f68b9c6f9 97d78792-2482-4242-82fa-ddbbeaea25f6 94567 2 2020-07-20 02:22:24 +0000 UTC map[name:rollover-pod pod-template-hash:6f68b9c6f9] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 443d4f44-2d62-450d-94f9-5c8f30147082 0xc002a6da47 0xc002a6da48}] [] [{kube-controller-manager Update apps/v1 2020-07-20 02:22:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"443d4f44-2d62-450d-94f9-5c8f30147082\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 6f68b9c6f9,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:6f68b9c6f9] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002a6dad8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jul 20 02:22:42.055: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Jul 20 02:22:42.055: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-1268 /apis/apps/v1/namespaces/deployment-1268/replicasets/test-rollover-controller ff2e5a9e-7151-4458-ac51-63044649f0c8 94580 2 2020-07-20 02:22:15 +0000 UTC map[name:rollover-pod 
pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 443d4f44-2d62-450d-94f9-5c8f30147082 0xc002a6d937 0xc002a6d938}] [] [{e2e.test Update apps/v1 2020-07-20 02:22:15 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-07-20 02:22:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"443d4f44-2d62-450d-94f9-5c8f30147082\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002a6d9d8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jul 20 02:22:42.055: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-78bc8b888c deployment-1268 /apis/apps/v1/namespaces/deployment-1268/replicasets/test-rollover-deployment-78bc8b888c 870153fa-6c74-475e-be6c-194822886cb9 94520 2 2020-07-20 02:22:22 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 443d4f44-2d62-450d-94f9-5c8f30147082 0xc002a6db47 0xc002a6db48}] [] [{kube-controller-manager Update apps/v1 2020-07-20 02:22:24 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"443d4f44-2d62-450d-94f9-5c8f30147082\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 78bc8b888c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002a6dbf8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jul 20 02:22:42.058: INFO: Pod "test-rollover-deployment-6f68b9c6f9-cd6rq" is available: &Pod{ObjectMeta:{test-rollover-deployment-6f68b9c6f9-cd6rq test-rollover-deployment-6f68b9c6f9- deployment-1268 /api/v1/namespaces/deployment-1268/pods/test-rollover-deployment-6f68b9c6f9-cd6rq 30c45111-fef7-45a5-a8c8-7cf30a1f7548 94537 0 2020-07-20 02:22:24 +0000 UTC map[name:rollover-pod pod-template-hash:6f68b9c6f9] map[] [{apps/v1 ReplicaSet test-rollover-deployment-6f68b9c6f9 97d78792-2482-4242-82fa-ddbbeaea25f6 0xc00219c3f7 0xc00219c3f8}] [] [{kube-controller-manager Update v1 2020-07-20 02:22:24 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"97d78792-2482-4242-82fa-ddbbeaea25f6\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-07-20 02:22:28 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.189\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xn855,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xn855,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xn855,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:22:24 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:22:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:22:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:22:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:10.244.2.189,StartTime:2020-07-20 02:22:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-20 02:22:27 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0,ContainerID:containerd://230578da0f5293e8e85fbb938768dae02151fbdc587c06c561da7b437c952ffc,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.189,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:22:42.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1268" for this suite. • [SLOW TEST:26.754 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":294,"completed":111,"skipped":1989,"failed":0} SSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:22:42.065: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:22:59.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-2088" for this suite. 
STEP: Destroying namespace "nsdeletetest-6405" for this suite. Jul 20 02:22:59.172: INFO: Namespace nsdeletetest-6405 was already deleted STEP: Destroying namespace "nsdeletetest-758" for this suite. • [SLOW TEST:17.110 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":294,"completed":112,"skipped":1993,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:22:59.175: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:255 [It] should check if kubectl diff finds a difference for Deployments [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create deployment with httpd image Jul 20 02:22:59.310: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config create -f -' Jul 20 02:23:02.681: INFO: stderr: "" Jul 20 02:23:02.681: INFO: stdout: "deployment.apps/httpd-deployment created\n" STEP: verify diff finds difference between live and declared image Jul 20 02:23:02.681: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config diff -f -' Jul 20 02:23:03.213: INFO: rc: 1 Jul 20 02:23:03.213: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config delete -f -' Jul 20 02:23:03.324: INFO: stderr: "" Jul 20 02:23:03.324: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:23:03.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-214" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":294,"completed":113,"skipped":1995,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:23:03.378: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Jul 20 02:23:03.419: INFO: >>> kubeConfig: /root/.kube/config Jul 20 02:23:06.380: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:23:17.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1213" for this suite. • [SLOW TEST:13.810 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":294,"completed":114,"skipped":1997,"failed":0} [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:23:17.188: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container Jul 20 02:23:23.342: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-7123 PodName:pod-sharedvolume-33fdda81-f995-4963-bbea-8a4e4d6bc40b ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 20 02:23:23.342: INFO: >>> kubeConfig: /root/.kube/config I0720 02:23:23.377284 8 log.go:181] (0xc0026f7a20) (0xc0027925a0) Create stream I0720 02:23:23.377315 8 log.go:181] (0xc0026f7a20) (0xc0027925a0) Stream added, 
broadcasting: 1 I0720 02:23:23.380492 8 log.go:181] (0xc0026f7a20) Reply frame received for 1 I0720 02:23:23.380556 8 log.go:181] (0xc0026f7a20) (0xc002be23c0) Create stream I0720 02:23:23.380577 8 log.go:181] (0xc0026f7a20) (0xc002be23c0) Stream added, broadcasting: 3 I0720 02:23:23.383570 8 log.go:181] (0xc0026f7a20) Reply frame received for 3 I0720 02:23:23.383624 8 log.go:181] (0xc0026f7a20) (0xc0027926e0) Create stream I0720 02:23:23.383647 8 log.go:181] (0xc0026f7a20) (0xc0027926e0) Stream added, broadcasting: 5 I0720 02:23:23.384545 8 log.go:181] (0xc0026f7a20) Reply frame received for 5 I0720 02:23:23.454400 8 log.go:181] (0xc0026f7a20) Data frame received for 5 I0720 02:23:23.454455 8 log.go:181] (0xc0026f7a20) Data frame received for 3 I0720 02:23:23.454497 8 log.go:181] (0xc002be23c0) (3) Data frame handling I0720 02:23:23.454509 8 log.go:181] (0xc002be23c0) (3) Data frame sent I0720 02:23:23.454521 8 log.go:181] (0xc0026f7a20) Data frame received for 3 I0720 02:23:23.454535 8 log.go:181] (0xc002be23c0) (3) Data frame handling I0720 02:23:23.454587 8 log.go:181] (0xc0027926e0) (5) Data frame handling I0720 02:23:23.456220 8 log.go:181] (0xc0026f7a20) Data frame received for 1 I0720 02:23:23.456245 8 log.go:181] (0xc0027925a0) (1) Data frame handling I0720 02:23:23.456271 8 log.go:181] (0xc0027925a0) (1) Data frame sent I0720 02:23:23.456327 8 log.go:181] (0xc0026f7a20) (0xc0027925a0) Stream removed, broadcasting: 1 I0720 02:23:23.456388 8 log.go:181] (0xc0026f7a20) Go away received I0720 02:23:23.456444 8 log.go:181] (0xc0026f7a20) (0xc0027925a0) Stream removed, broadcasting: 1 I0720 02:23:23.456485 8 log.go:181] (0xc0026f7a20) (0xc002be23c0) Stream removed, broadcasting: 3 I0720 02:23:23.456505 8 log.go:181] (0xc0026f7a20) (0xc0027926e0) Stream removed, broadcasting: 5 Jul 20 02:23:23.456: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:23:23.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7123" for this suite. 
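The exec above reads /usr/share/volumeshare/shareddata.txt from busybox-main-container, which sees data written by a sibling container through a shared emptyDir volume. A minimal sketch of such a pod; only the file path and the reader container's name appear in the log, so the volume name, images, and the writer container are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: pod-sharedvolume
spec:
  volumes:
  - name: shared-data
    emptyDir: {}            # the same backing directory is mounted into both containers
  containers:
  - name: busybox-writer-container        # hypothetical writer
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "echo hello > /usr/share/volumeshare/shareddata.txt && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/volumeshare
  - name: busybox-main-container          # name taken from the exec in the log
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/volumeshare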
• [SLOW TEST:6.291 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":294,"completed":115,"skipped":1997,"failed":0} SSSSSSSSSSS ------------------------------ [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:23:23.480: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename certificates STEP: Waiting for a default service account to be provisioned in namespace [It] should support CSR API operations [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting /apis STEP: getting /apis/certificates.k8s.io STEP: getting /apis/certificates.k8s.io/v1 STEP: creating STEP: getting STEP: listing STEP: watching Jul 20 02:23:24.490: INFO: starting watch STEP: patching STEP: updating Jul 20 02:23:24.509: INFO: waiting for watch events with expected annotations Jul 20 02:23:24.509: INFO: saw patched and updated annotations STEP: getting /approval STEP: patching /approval STEP: updating /approval STEP: getting /status STEP: patching /status STEP: updating /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:23:24.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "certificates-7119" for this suite. 
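The CSR API operations exercised above (create, get, list, watch, patch, delete, plus the /approval and /status subresources) act on objects of the shape sketched below; the name, signerName, and usages are assumptions, and the request field is a placeholder for a base64-encoded PEM certificate request:

apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: example-csr                       # hypothetical name
spec:
  request: <base64-encoded PEM CSR>       # placeholder, not valid content
  signerName: example.com/e2e             # hypothetical custom signer
  usages:
  - digital signature
  - key encipherment
  - client auth

Approval is recorded through the object's /approval subresource (for example via `kubectl certificate approve`), and the issued certificate, if any, appears under the /status subresource.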
•{"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":294,"completed":116,"skipped":2008,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:23:24.672: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: set up a multi version CRD Jul 20 02:23:24.810: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:23:42.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1686" for this suite. • [SLOW TEST:17.408 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":294,"completed":117,"skipped":2026,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:23:42.081: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Jul 20 02:23:42.152: INFO: Waiting up to 5m0s for pod "downward-api-67831533-f298-4e82-8a66-a0e728fcacb0" in namespace "downward-api-758" to be "Succeeded or Failed" Jul 20 02:23:42.224: INFO: Pod "downward-api-67831533-f298-4e82-8a66-a0e728fcacb0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 71.340023ms Jul 20 02:23:44.228: INFO: Pod "downward-api-67831533-f298-4e82-8a66-a0e728fcacb0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075749964s Jul 20 02:23:46.232: INFO: Pod "downward-api-67831533-f298-4e82-8a66-a0e728fcacb0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.079362085s Jul 20 02:23:48.236: INFO: Pod "downward-api-67831533-f298-4e82-8a66-a0e728fcacb0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.083756173s STEP: Saw pod success Jul 20 02:23:48.236: INFO: Pod "downward-api-67831533-f298-4e82-8a66-a0e728fcacb0" satisfied condition "Succeeded or Failed" Jul 20 02:23:48.239: INFO: Trying to get logs from node latest-worker2 pod downward-api-67831533-f298-4e82-8a66-a0e728fcacb0 container dapi-container: STEP: delete the pod Jul 20 02:23:48.292: INFO: Waiting for pod downward-api-67831533-f298-4e82-8a66-a0e728fcacb0 to disappear Jul 20 02:23:48.320: INFO: Pod downward-api-67831533-f298-4e82-8a66-a0e728fcacb0 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:23:48.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-758" for this suite. • [SLOW TEST:6.247 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":294,"completed":118,"skipped":2049,"failed":0} [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:23:48.328: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 20 02:23:49.898: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jul 20 02:23:52.021: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730808629, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730808629, loc:(*time.Location)(0x7deddc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", 
Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730808629, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730808629, loc:(*time.Location)(0x7deddc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 20 02:23:54.025: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730808629, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730808629, loc:(*time.Location)(0x7deddc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730808629, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730808629, loc:(*time.Location)(0x7deddc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 20 02:23:57.073: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:24:07.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4929" for this suite. STEP: Destroying namespace "webhook-4929-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:19.658 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":294,"completed":119,"skipped":2049,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:24:07.987: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:89 Jul 20 02:24:08.562: INFO: Waiting up to 1m0s for all nodes to be ready Jul 20 02:25:08.583: INFO: Waiting for terminating namespaces to be deleted... [BeforeEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:25:08.586: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption-path STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:487 STEP: Finding an available node STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. Jul 20 02:25:14.904: INFO: found a healthy node: latest-worker2 [It] runs ReplicaSets to verify preemption running path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 20 02:25:32.123: INFO: pods created so far: [1 1 1] Jul 20 02:25:32.123: INFO: length of pods created so far: 3 Jul 20 02:25:48.133: INFO: pods created so far: [2 2 1] [AfterEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:25:55.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-path-3095" for this suite. 
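Preemption in the test above hinges on pod priority: the ReplicaSets run at different PriorityClass values, and higher-priority pods evict lower-priority ones when the chosen node is full. A minimal sketch under assumed names and values:

apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority                     # hypothetical name
value: 1000000                            # higher value wins during preemption
globalDefault: false
description: "Pods at this priority may preempt lower-priority pods."
---
apiVersion: v1
kind: Pod
metadata:
  name: preemptor-pod                     # hypothetical
spec:
  priorityClassName: high-priority
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.2
    resources:
      requests:
        cpu: "500m"                       # sized so scheduling requires evicting lower-priority pods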
[AfterEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:461 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:25:55.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-661" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77 • [SLOW TEST:107.988 seconds] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:450 runs ReplicaSets to verify preemption running path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":294,"completed":120,"skipped":2071,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:25:55.976: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Jul 20 02:25:58.039: INFO: Pod name wrapped-volume-race-05196e9b-f260-4a92-8d2b-9814273d735f: Found 0 pods out of 5 Jul 20 02:26:03.092: INFO: Pod name wrapped-volume-race-05196e9b-f260-4a92-8d2b-9814273d735f: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-05196e9b-f260-4a92-8d2b-9814273d735f in namespace emptydir-wrapper-6852, will wait for the garbage collector to delete the pods Jul 20 02:26:17.936: INFO: Deleting ReplicationController wrapped-volume-race-05196e9b-f260-4a92-8d2b-9814273d735f took: 8.235185ms Jul 20 02:26:18.536: INFO: Terminating ReplicationController wrapped-volume-race-05196e9b-f260-4a92-8d2b-9814273d735f pods took: 600.19314ms STEP: Creating RC which spawns configmap-volume pods Jul 20 02:26:34.074: INFO: Pod name wrapped-volume-race-a95ddcd6-4208-4b37-a4be-6ca10c744827: Found 0 pods out of 5 Jul 20 02:26:39.083: INFO: Pod name wrapped-volume-race-a95ddcd6-4208-4b37-a4be-6ca10c744827: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-a95ddcd6-4208-4b37-a4be-6ca10c744827 in namespace emptydir-wrapper-6852, will wait for the garbage collector to delete the pods Jul 20 02:26:55.164: INFO: Deleting ReplicationController wrapped-volume-race-a95ddcd6-4208-4b37-a4be-6ca10c744827 took: 
5.488931ms Jul 20 02:26:55.664: INFO: Terminating ReplicationController wrapped-volume-race-a95ddcd6-4208-4b37-a4be-6ca10c744827 pods took: 500.320351ms STEP: Creating RC which spawns configmap-volume pods Jul 20 02:27:04.014: INFO: Pod name wrapped-volume-race-158403c6-d67c-470e-b7ef-a8e2b6e8668a: Found 0 pods out of 5 Jul 20 02:27:09.044: INFO: Pod name wrapped-volume-race-158403c6-d67c-470e-b7ef-a8e2b6e8668a: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-158403c6-d67c-470e-b7ef-a8e2b6e8668a in namespace emptydir-wrapper-6852, will wait for the garbage collector to delete the pods Jul 20 02:27:25.134: INFO: Deleting ReplicationController wrapped-volume-race-158403c6-d67c-470e-b7ef-a8e2b6e8668a took: 7.32921ms Jul 20 02:27:25.635: INFO: Terminating ReplicationController wrapped-volume-race-158403c6-d67c-470e-b7ef-a8e2b6e8668a pods took: 500.24469ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:27:44.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-6852" for this suite. • [SLOW TEST:108.495 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":294,"completed":121,"skipped":2080,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:27:44.472: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 20 02:27:45.168: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jul 20 02:27:47.179: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730808865, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730808865, loc:(*time.Location)(0x7deddc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730808865, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730808865, loc:(*time.Location)(0x7deddc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 20 02:27:49.184: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730808865, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730808865, loc:(*time.Location)(0x7deddc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730808865, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730808865, loc:(*time.Location)(0x7deddc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 20 02:27:52.272: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:27:52.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5244" for this suite. STEP: Destroying namespace "webhook-5244-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.129 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":294,"completed":122,"skipped":2108,"failed":0} SSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:27:52.601: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 20 02:27:52.733: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Jul 20 02:27:54.156: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:27:54.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-6395" for this suite. 
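The failure condition above comes from a pod quota colliding with a ReplicationController that asks for more pods than the quota allows; the controller surfaces this as a ReplicaFailure condition in status.conditions, and scaling the RC back within the quota clears it. A sketch using the names from the log; the replica count, labels, and image are assumptions:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: condition-test
spec:
  hard:
    pods: "2"                             # only two pods allowed, per the log
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: condition-test
spec:
  replicas: 3                             # more than the quota permits (assumed count)
  selector:
    name: condition-test
  template:
    metadata:
      labels:
        name: condition-test
    spec:
      containers:
      - name: httpd                       # illustrative container
        image: docker.io/library/httpd:2.4.38-alpine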
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":294,"completed":123,"skipped":2112,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:27:54.310: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Jul 20 02:28:03.291: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jul 20 02:28:03.346: INFO: Pod pod-with-prestop-exec-hook still exists Jul 20 02:28:05.346: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jul 20 02:28:05.358: INFO: Pod pod-with-prestop-exec-hook still exists Jul 20 02:28:07.346: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jul 20 02:28:07.350: INFO: Pod pod-with-prestop-exec-hook still exists Jul 20 02:28:09.346: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jul 20 02:28:09.545: INFO: Pod pod-with-prestop-exec-hook still exists Jul 20 02:28:11.346: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jul 20 02:28:11.354: INFO: Pod pod-with-prestop-exec-hook still exists Jul 20 02:28:13.346: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jul 20 02:28:13.350: INFO: Pod pod-with-prestop-exec-hook still exists Jul 20 02:28:15.346: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jul 20 02:28:15.350: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:28:15.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-5542" for this suite. 
• [SLOW TEST:21.067 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":294,"completed":124,"skipped":2124,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:28:15.377: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jul 20 02:28:19.612: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:28:19.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2039" for this suite. 
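The "Expected: &{DONE}" check above works because, with TerminationMessagePolicy FallbackToLogsOnError, the kubelet uses the tail of the container log as the termination message when the container fails without writing its termination file. A minimal sketch; the pod and container names are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: termination-message-from-logs     # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: docker.io/library/busybox:1.29
    # Writes to stdout and exits non-zero without touching /dev/termination-log,
    # so the kubelet falls back to the log tail ("DONE") as the termination message.
    command: ["/bin/sh", "-c", "echo DONE; exit 1"]
    terminationMessagePolicy: FallbackToLogsOnError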
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":294,"completed":125,"skipped":2148,"failed":0} SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:28:19.847: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:255 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating all guestbook components Jul 20 02:28:19.908: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-replica labels: app: agnhost role: replica tier: backend spec: ports: - port: 6379 selector: app: agnhost role: replica tier: backend Jul 20 02:28:19.908: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4455' Jul 20 02:28:20.252: INFO: stderr: "" Jul 20 02:28:20.252: INFO: stdout: "service/agnhost-replica created\n" Jul 20 02:28:20.252: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-primary labels: app: agnhost role: primary tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: primary tier: backend Jul 20 02:28:20.252: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4455' Jul 20 02:28:20.612: INFO: stderr: "" Jul 20 02:28:20.612: INFO: stdout: "service/agnhost-primary created\n" Jul 20 02:28:20.613: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Jul 20 02:28:20.613: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4455' Jul 20 02:28:21.097: INFO: stderr: "" Jul 20 02:28:21.097: INFO: stdout: "service/frontend created\n" Jul 20 02:28:21.098: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 Jul 20 02:28:21.098: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4455' Jul 20 02:28:21.430: INFO: stderr: "" Jul 20 02:28:21.430: INFO: stdout: "deployment.apps/frontend created\n" Jul 20 02:28:21.430: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-primary spec: replicas: 1 selector: matchLabels: app: agnhost role: primary tier: backend template: metadata: labels: app: agnhost role: primary tier: backend spec: containers: - name: primary image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Jul 20 02:28:21.430: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4455' Jul 20 02:28:21.867: INFO: stderr: "" Jul 20 02:28:21.867: INFO: stdout: "deployment.apps/agnhost-primary created\n" Jul 20 02:28:21.867: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-replica spec: replicas: 2 selector: matchLabels: app: agnhost role: replica tier: backend template: metadata: labels: app: agnhost role: replica tier: backend spec: containers: - name: replica image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20 args: [ "guestbook", "--replicaof", "agnhost-primary", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Jul 20 02:28:21.867: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4455' Jul 20 02:28:22.176: INFO: stderr: "" Jul 20 02:28:22.176: INFO: stdout: "deployment.apps/agnhost-replica created\n" STEP: validating guestbook app Jul 20 02:28:22.176: INFO: Waiting for all frontend pods to be Running. Jul 20 02:28:32.227: INFO: Waiting for frontend to serve content. Jul 20 02:28:32.237: INFO: Trying to add a new entry to the guestbook. Jul 20 02:28:32.247: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Jul 20 02:28:32.254: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4455' Jul 20 02:28:32.392: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jul 20 02:28:32.392: INFO: stdout: "service \"agnhost-replica\" force deleted\n" STEP: using delete to clean up resources Jul 20 02:28:32.392: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4455' Jul 20 02:28:32.585: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jul 20 02:28:32.585: INFO: stdout: "service \"agnhost-primary\" force deleted\n" STEP: using delete to clean up resources Jul 20 02:28:32.586: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4455' Jul 20 02:28:32.755: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jul 20 02:28:32.755: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Jul 20 02:28:32.756: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4455' Jul 20 02:28:32.863: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jul 20 02:28:32.863: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Jul 20 02:28:32.863: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4455' Jul 20 02:28:32.969: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jul 20 02:28:32.970: INFO: stdout: "deployment.apps \"agnhost-primary\" force deleted\n" STEP: using delete to clean up resources Jul 20 02:28:32.970: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4455' Jul 20 02:28:33.363: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jul 20 02:28:33.363: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:28:33.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4455" for this suite. 
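For reference, the create/teardown pattern the guestbook test exercises above reduces to piping manifests into kubectl on stdin and force-deleting them the same way. A minimal standalone sketch of that pattern follows; the guestbook-demo namespace and the trimmed Service manifest are illustrative, not from this run:

    # Hypothetical namespace; any cluster context works.
    kubectl create namespace guestbook-demo
    # Create a resource from a manifest supplied on stdin, as the test does.
    cat <<'EOF' | kubectl create -f - --namespace=guestbook-demo
    apiVersion: v1
    kind: Service
    metadata:
      name: frontend
      labels:
        app: guestbook
        tier: frontend
    spec:
      ports:
      - port: 80
      selector:
        app: guestbook
        tier: frontend
    EOF
    # Force deletion returns before the kubelet confirms termination,
    # which is what produces the "Immediate deletion" warning in the log.
    cat <<'EOF' | kubectl delete --grace-period=0 --force -f - --namespace=guestbook-demo
    apiVersion: v1
    kind: Service
    metadata:
      name: frontend
    EOF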
• [SLOW TEST:13.728 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:350 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":294,"completed":126,"skipped":2158,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:28:33.576: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jul 20 02:28:41.095: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:28:41.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1751" for this suite. 
• [SLOW TEST:7.679 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:134 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":294,"completed":127,"skipped":2184,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:28:41.255: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 20 02:28:41.578: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Jul 20 02:28:44.550: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3852 create -f -' Jul 20 02:28:50.097: INFO: stderr: "" Jul 20 02:28:50.097: INFO: stdout: "e2e-test-crd-publish-openapi-1099-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Jul 20 02:28:50.097: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3852 delete e2e-test-crd-publish-openapi-1099-crds test-cr' Jul 20 02:28:50.225: INFO: stderr: "" Jul 20 02:28:50.225: INFO: stdout: "e2e-test-crd-publish-openapi-1099-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Jul 20 02:28:50.225: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3852 apply -f -' Jul 20 02:28:50.513: INFO: stderr: "" Jul 20 02:28:50.513: INFO: stdout: "e2e-test-crd-publish-openapi-1099-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Jul 20 02:28:50.514: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3852 delete e2e-test-crd-publish-openapi-1099-crds test-cr' Jul 20 02:28:50.630: INFO: stderr: "" Jul 20 02:28:50.630: INFO: stdout: "e2e-test-crd-publish-openapi-1099-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl 
explain works to explain CR Jul 20 02:28:50.630: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1099-crds' Jul 20 02:28:50.900: INFO: stderr: "" Jul 20 02:28:50.901: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1099-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:28:53.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3852" for this suite. • [SLOW TEST:12.621 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":294,"completed":128,"skipped":2195,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:28:53.876: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-1d825e7d-fd89-4782-bfe6-7937a864d904 STEP: Creating a pod to test consume configMaps Jul 20 02:28:54.000: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9bb23ecb-9153-48d8-9bed-3e4079839125" in namespace "projected-2908" to be "Succeeded or Failed" Jul 20 02:28:54.019: INFO: Pod "pod-projected-configmaps-9bb23ecb-9153-48d8-9bed-3e4079839125": Phase="Pending", Reason="", readiness=false. Elapsed: 18.865391ms Jul 20 02:28:56.023: INFO: Pod "pod-projected-configmaps-9bb23ecb-9153-48d8-9bed-3e4079839125": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023107961s Jul 20 02:28:58.102: INFO: Pod "pod-projected-configmaps-9bb23ecb-9153-48d8-9bed-3e4079839125": Phase="Pending", Reason="", readiness=false. Elapsed: 4.101611792s Jul 20 02:29:00.105: INFO: Pod "pod-projected-configmaps-9bb23ecb-9153-48d8-9bed-3e4079839125": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.105001546s STEP: Saw pod success Jul 20 02:29:00.105: INFO: Pod "pod-projected-configmaps-9bb23ecb-9153-48d8-9bed-3e4079839125" satisfied condition "Succeeded or Failed" Jul 20 02:29:00.108: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-9bb23ecb-9153-48d8-9bed-3e4079839125 container projected-configmap-volume-test: STEP: delete the pod Jul 20 02:29:00.146: INFO: Waiting for pod pod-projected-configmaps-9bb23ecb-9153-48d8-9bed-3e4079839125 to disappear Jul 20 02:29:00.158: INFO: Pod pod-projected-configmaps-9bb23ecb-9153-48d8-9bed-3e4079839125 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:29:00.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2908" for this suite. • [SLOW TEST:6.296 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":294,"completed":129,"skipped":2202,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:29:00.173: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: waiting for pod running STEP: creating a file in subpath Jul 20 02:29:04.916: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-7461 PodName:var-expansion-5bdbade8-53a6-475d-884a-4b4bb4207260 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 20 02:29:04.916: INFO: >>> kubeConfig: /root/.kube/config I0720 02:29:04.957392 8 log.go:181] (0xc00596b810) (0xc00128ec80) Create stream I0720 02:29:04.957423 8 log.go:181] (0xc00596b810) (0xc00128ec80) Stream added, broadcasting: 1 I0720 02:29:04.959424 8 log.go:181] (0xc00596b810) Reply frame received for 1 I0720 02:29:04.959471 8 log.go:181] (0xc00596b810) (0xc0010d0500) Create stream I0720 02:29:04.959487 8 log.go:181] (0xc00596b810) (0xc0010d0500) Stream added, broadcasting: 3 I0720 02:29:04.960652 8 log.go:181] (0xc00596b810) Reply frame received for 3 I0720 02:29:04.960706 8 log.go:181] (0xc00596b810) (0xc0013bc140) Create stream I0720 02:29:04.960804 8 log.go:181] (0xc00596b810) (0xc0013bc140) Stream added, broadcasting: 5 I0720 02:29:04.961781 8 log.go:181] (0xc00596b810) Reply frame received for 5 I0720 02:29:05.033319 8 log.go:181] (0xc00596b810) Data frame received for 5 
I0720 02:29:05.033352 8 log.go:181] (0xc0013bc140) (5) Data frame handling I0720 02:29:05.033375 8 log.go:181] (0xc00596b810) Data frame received for 3 I0720 02:29:05.033387 8 log.go:181] (0xc0010d0500) (3) Data frame handling I0720 02:29:05.034722 8 log.go:181] (0xc00596b810) Data frame received for 1 I0720 02:29:05.034752 8 log.go:181] (0xc00128ec80) (1) Data frame handling I0720 02:29:05.034779 8 log.go:181] (0xc00128ec80) (1) Data frame sent I0720 02:29:05.034806 8 log.go:181] (0xc00596b810) (0xc00128ec80) Stream removed, broadcasting: 1 I0720 02:29:05.034831 8 log.go:181] (0xc00596b810) Go away received I0720 02:29:05.034926 8 log.go:181] (0xc00596b810) (0xc00128ec80) Stream removed, broadcasting: 1 I0720 02:29:05.034963 8 log.go:181] (0xc00596b810) (0xc0010d0500) Stream removed, broadcasting: 3 I0720 02:29:05.034986 8 log.go:181] (0xc00596b810) (0xc0013bc140) Stream removed, broadcasting: 5 STEP: test for file in mounted path Jul 20 02:29:05.038: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-7461 PodName:var-expansion-5bdbade8-53a6-475d-884a-4b4bb4207260 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 20 02:29:05.038: INFO: >>> kubeConfig: /root/.kube/config I0720 02:29:05.070015 8 log.go:181] (0xc00596bd90) (0xc00128f4a0) Create stream I0720 02:29:05.070057 8 log.go:181] (0xc00596bd90) (0xc00128f4a0) Stream added, broadcasting: 1 I0720 02:29:05.073454 8 log.go:181] (0xc00596bd90) Reply frame received for 1 I0720 02:29:05.073506 8 log.go:181] (0xc00596bd90) (0xc002154500) Create stream I0720 02:29:05.073537 8 log.go:181] (0xc00596bd90) (0xc002154500) Stream added, broadcasting: 3 I0720 02:29:05.075200 8 log.go:181] (0xc00596bd90) Reply frame received for 3 I0720 02:29:05.075223 8 log.go:181] (0xc00596bd90) (0xc001e088c0) Create stream I0720 02:29:05.075233 8 log.go:181] (0xc00596bd90) (0xc001e088c0) Stream added, broadcasting: 5 I0720 02:29:05.076425 8 log.go:181] (0xc00596bd90) Reply frame received for 5 I0720 02:29:05.133106 8 log.go:181] (0xc00596bd90) Data frame received for 5 I0720 02:29:05.133144 8 log.go:181] (0xc001e088c0) (5) Data frame handling I0720 02:29:05.133169 8 log.go:181] (0xc00596bd90) Data frame received for 3 I0720 02:29:05.133183 8 log.go:181] (0xc002154500) (3) Data frame handling I0720 02:29:05.134458 8 log.go:181] (0xc00596bd90) Data frame received for 1 I0720 02:29:05.134483 8 log.go:181] (0xc00128f4a0) (1) Data frame handling I0720 02:29:05.134508 8 log.go:181] (0xc00128f4a0) (1) Data frame sent I0720 02:29:05.134525 8 log.go:181] (0xc00596bd90) (0xc00128f4a0) Stream removed, broadcasting: 1 I0720 02:29:05.134541 8 log.go:181] (0xc00596bd90) Go away received I0720 02:29:05.134735 8 log.go:181] (0xc00596bd90) (0xc00128f4a0) Stream removed, broadcasting: 1 I0720 02:29:05.134774 8 log.go:181] (0xc00596bd90) (0xc002154500) Stream removed, broadcasting: 3 I0720 02:29:05.134794 8 log.go:181] (0xc00596bd90) (0xc001e088c0) Stream removed, broadcasting: 5 STEP: updating the annotation value Jul 20 02:29:05.645: INFO: Successfully updated pod "var-expansion-5bdbade8-53a6-475d-884a-4b4bb4207260" STEP: waiting for annotated pod running STEP: deleting the pod gracefully Jul 20 02:29:05.677: INFO: Deleting pod "var-expansion-5bdbade8-53a6-475d-884a-4b4bb4207260" in namespace "var-expansion-7461" Jul 20 02:29:05.682: INFO: Wait up to 5m0s for pod "var-expansion-5bdbade8-53a6-475d-884a-4b4bb4207260" to be fully deleted [AfterEach] [k8s.io] Variable Expansion 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:29:45.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7461" for this suite. • [SLOW TEST:45.547 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]","total":294,"completed":130,"skipped":2210,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:29:45.721: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-70ba9822-dbfd-4d13-b9c6-9cae59909308 in namespace container-probe-7978 Jul 20 02:29:49.841: INFO: Started pod liveness-70ba9822-dbfd-4d13-b9c6-9cae59909308 in namespace container-probe-7978 STEP: checking the pod's current state and verifying that restartCount is present Jul 20 02:29:49.843: INFO: Initial restart count of pod liveness-70ba9822-dbfd-4d13-b9c6-9cae59909308 is 0 Jul 20 02:30:11.905: INFO: Restart count of pod container-probe-7978/liveness-70ba9822-dbfd-4d13-b9c6-9cae59909308 is now 1 (22.061241676s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:30:11.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7978" for this suite. 
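What drives the restart counted above is an httpGet probe against /healthz. A minimal pod reproducing the behavior might look like the sketch below; the pod name is hypothetical, and the image, port, and probe thresholds are assumptions based on agnhost's liveness mode, which intentionally starts failing /healthz after a short interval:

    # Hypothetical sketch; the e2e framework constructs an equivalent pod in Go.
    apiVersion: v1
    kind: Pod
    metadata:
      name: liveness-demo
    spec:
      containers:
      - name: liveness
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
        args: ["liveness"]            # serves /healthz, then deliberately starts failing it
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 15
          failureThreshold: 1

    # restartCount climbs past 0 once /healthz starts returning errors,
    # matching the "Restart count ... is now 1" line in the log:
    kubectl get pod liveness-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'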
• [SLOW TEST:26.249 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":294,"completed":131,"skipped":2247,"failed":0} SS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:30:11.970: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-4264.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-4264.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4264.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-4264.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-4264.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4264.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jul 20 02:30:18.119: INFO: DNS probes using dns-4264/dns-test-39d39cf5-fad1-4d37-a9c5-7a116328874a succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:30:18.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4264" for this suite. 
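De-escaped for readability, one iteration of the wheezy/jessie probe loops above amounts to the commands below, run inside a pod in the dns-4264 namespace (the doubled `$$` in the logged commands is template escaping for a single `$`):

    # Hostnames resolve via /etc/hosts entries the kubelet injects:
    test -n "$(getent hosts dns-querier-1.dns-test-service.dns-4264.svc.cluster.local)" && echo OK
    test -n "$(getent hosts dns-querier-1)" && echo OK
    # Derive the pod's A record name (a-b-c-d.<namespace>.pod.cluster.local) from its IP:
    podARec=$(hostname -i | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-4264.pod.cluster.local"}')
    # Confirm it resolves over both UDP and TCP:
    dig +notcp +noall +answer +search "${podARec}" A
    dig +tcp +noall +answer +search "${podARec}" A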
• [SLOW TEST:6.239 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":294,"completed":132,"skipped":2249,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:30:18.210: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod Jul 20 02:30:18.254: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:30:24.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-2617" for this suite. 
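The pod spec for this init-container test is only summarized in the log ("PodSpec: initContainers in spec.initContainers"). A minimal pod demonstrating the same guarantee, with hypothetical names and an illustrative image, is:

    # Hypothetical sketch: with restartPolicy Never, a failing init container
    # permanently fails the pod and the app container never starts.
    apiVersion: v1
    kind: Pod
    metadata:
      name: init-fail-demo
    spec:
      restartPolicy: Never
      initContainers:
      - name: init1
        image: busybox:1.29          # illustrative image
        command: ["/bin/false"]      # always exits non-zero
      containers:
      - name: app
        image: busybox:1.29
        command: ["sleep", "3600"]   # never reached

    # The pod ends up Failed, shown as Init:Error in `kubectl get pod` output.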
• [SLOW TEST:6.623 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":294,"completed":133,"skipped":2286,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:30:24.834: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:731 [It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-8631 Jul 20 02:30:29.135: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-8631 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Jul 20 02:30:29.353: INFO: stderr: "I0720 02:30:29.277238 1896 log.go:181] (0xc0006b7b80) (0xc000b25540) Create stream\nI0720 02:30:29.277289 1896 log.go:181] (0xc0006b7b80) (0xc000b25540) Stream added, broadcasting: 1\nI0720 02:30:29.279907 1896 log.go:181] (0xc0006b7b80) Reply frame received for 1\nI0720 02:30:29.279953 1896 log.go:181] (0xc0006b7b80) (0xc000880b40) Create stream\nI0720 02:30:29.279969 1896 log.go:181] (0xc0006b7b80) (0xc000880b40) Stream added, broadcasting: 3\nI0720 02:30:29.281122 1896 log.go:181] (0xc0006b7b80) Reply frame received for 3\nI0720 02:30:29.281147 1896 log.go:181] (0xc0006b7b80) (0xc0004e5c20) Create stream\nI0720 02:30:29.281164 1896 log.go:181] (0xc0006b7b80) (0xc0004e5c20) Stream added, broadcasting: 5\nI0720 02:30:29.282090 1896 log.go:181] (0xc0006b7b80) Reply frame received for 5\nI0720 02:30:29.343471 1896 log.go:181] (0xc0006b7b80) Data frame received for 5\nI0720 02:30:29.343496 1896 log.go:181] (0xc0004e5c20) (5) Data frame handling\nI0720 02:30:29.343514 1896 log.go:181] (0xc0004e5c20) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI0720 02:30:29.346933 1896 log.go:181] (0xc0006b7b80) Data frame received for 3\nI0720 02:30:29.346953 1896 log.go:181] (0xc000880b40) (3) Data frame handling\nI0720 02:30:29.346987 1896 log.go:181] (0xc000880b40) (3) Data frame sent\nI0720 02:30:29.347481 1896 log.go:181] (0xc0006b7b80) Data frame received for 5\nI0720 02:30:29.347503 1896 log.go:181] (0xc0004e5c20) (5) Data frame handling\nI0720 02:30:29.347664 1896 log.go:181] (0xc0006b7b80) Data frame received for 3\nI0720 
02:30:29.347686 1896 log.go:181] (0xc000880b40) (3) Data frame handling\nI0720 02:30:29.349152 1896 log.go:181] (0xc0006b7b80) Data frame received for 1\nI0720 02:30:29.349176 1896 log.go:181] (0xc000b25540) (1) Data frame handling\nI0720 02:30:29.349189 1896 log.go:181] (0xc000b25540) (1) Data frame sent\nI0720 02:30:29.349202 1896 log.go:181] (0xc0006b7b80) (0xc000b25540) Stream removed, broadcasting: 1\nI0720 02:30:29.349258 1896 log.go:181] (0xc0006b7b80) Go away received\nI0720 02:30:29.349592 1896 log.go:181] (0xc0006b7b80) (0xc000b25540) Stream removed, broadcasting: 1\nI0720 02:30:29.349604 1896 log.go:181] (0xc0006b7b80) (0xc000880b40) Stream removed, broadcasting: 3\nI0720 02:30:29.349609 1896 log.go:181] (0xc0006b7b80) (0xc0004e5c20) Stream removed, broadcasting: 5\n" Jul 20 02:30:29.353: INFO: stdout: "iptables" Jul 20 02:30:29.353: INFO: proxyMode: iptables Jul 20 02:30:29.358: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jul 20 02:30:29.420: INFO: Pod kube-proxy-mode-detector still exists Jul 20 02:30:31.420: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jul 20 02:30:31.425: INFO: Pod kube-proxy-mode-detector still exists Jul 20 02:30:33.420: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jul 20 02:30:33.424: INFO: Pod kube-proxy-mode-detector still exists Jul 20 02:30:35.420: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jul 20 02:30:35.425: INFO: Pod kube-proxy-mode-detector still exists Jul 20 02:30:37.420: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jul 20 02:30:37.425: INFO: Pod kube-proxy-mode-detector still exists Jul 20 02:30:39.420: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jul 20 02:30:39.424: INFO: Pod kube-proxy-mode-detector still exists Jul 20 02:30:41.420: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jul 20 02:30:41.425: INFO: Pod kube-proxy-mode-detector still exists Jul 20 02:30:43.420: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jul 20 02:30:43.424: INFO: Pod kube-proxy-mode-detector still exists Jul 20 02:30:45.420: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jul 20 02:30:45.423: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-nodeport-timeout in namespace services-8631 STEP: creating replication controller affinity-nodeport-timeout in namespace services-8631 I0720 02:30:45.523551 8 runners.go:190] Created replication controller with name: affinity-nodeport-timeout, namespace: services-8631, replica count: 3 I0720 02:30:48.573917 8 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0720 02:30:51.574191 8 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jul 20 02:30:51.584: INFO: Creating new exec pod Jul 20 02:30:56.617: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-8631 execpod-affinityvr27j -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-timeout 80' Jul 20 02:30:56.860: INFO: stderr: "I0720 02:30:56.777544 1914 log.go:181] (0xc0007b2000) (0xc000988140) Create stream\nI0720 02:30:56.777604 1914 log.go:181] (0xc0007b2000) (0xc000988140) Stream added, broadcasting: 1\nI0720 02:30:56.779414 1914 log.go:181] (0xc0007b2000) Reply frame received for 1\nI0720 02:30:56.779469 1914 
log.go:181] (0xc0007b2000) (0xc000916280) Create stream\nI0720 02:30:56.779485 1914 log.go:181] (0xc0007b2000) (0xc000916280) Stream added, broadcasting: 3\nI0720 02:30:56.780350 1914 log.go:181] (0xc0007b2000) Reply frame received for 3\nI0720 02:30:56.780381 1914 log.go:181] (0xc0007b2000) (0xc000916b40) Create stream\nI0720 02:30:56.780390 1914 log.go:181] (0xc0007b2000) (0xc000916b40) Stream added, broadcasting: 5\nI0720 02:30:56.781475 1914 log.go:181] (0xc0007b2000) Reply frame received for 5\nI0720 02:30:56.851775 1914 log.go:181] (0xc0007b2000) Data frame received for 5\nI0720 02:30:56.851804 1914 log.go:181] (0xc000916b40) (5) Data frame handling\nI0720 02:30:56.851819 1914 log.go:181] (0xc000916b40) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport-timeout 80\nI0720 02:30:56.852215 1914 log.go:181] (0xc0007b2000) Data frame received for 5\nI0720 02:30:56.852239 1914 log.go:181] (0xc000916b40) (5) Data frame handling\nI0720 02:30:56.852256 1914 log.go:181] (0xc000916b40) (5) Data frame sent\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\nI0720 02:30:56.852497 1914 log.go:181] (0xc0007b2000) Data frame received for 5\nI0720 02:30:56.852529 1914 log.go:181] (0xc000916b40) (5) Data frame handling\nI0720 02:30:56.852661 1914 log.go:181] (0xc0007b2000) Data frame received for 3\nI0720 02:30:56.852679 1914 log.go:181] (0xc000916280) (3) Data frame handling\nI0720 02:30:56.855177 1914 log.go:181] (0xc0007b2000) Data frame received for 1\nI0720 02:30:56.855196 1914 log.go:181] (0xc000988140) (1) Data frame handling\nI0720 02:30:56.855212 1914 log.go:181] (0xc000988140) (1) Data frame sent\nI0720 02:30:56.855228 1914 log.go:181] (0xc0007b2000) (0xc000988140) Stream removed, broadcasting: 1\nI0720 02:30:56.855250 1914 log.go:181] (0xc0007b2000) Go away received\nI0720 02:30:56.855689 1914 log.go:181] (0xc0007b2000) (0xc000988140) Stream removed, broadcasting: 1\nI0720 02:30:56.855712 1914 log.go:181] (0xc0007b2000) (0xc000916280) Stream removed, broadcasting: 3\nI0720 02:30:56.855725 1914 log.go:181] (0xc0007b2000) (0xc000916b40) Stream removed, broadcasting: 5\n" Jul 20 02:30:56.860: INFO: stdout: "" Jul 20 02:30:56.861: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-8631 execpod-affinityvr27j -- /bin/sh -x -c nc -zv -t -w 2 10.109.208.171 80' Jul 20 02:30:57.065: INFO: stderr: "I0720 02:30:56.982710 1929 log.go:181] (0xc0007d3340) (0xc000aeb4a0) Create stream\nI0720 02:30:56.982753 1929 log.go:181] (0xc0007d3340) (0xc000aeb4a0) Stream added, broadcasting: 1\nI0720 02:30:56.984922 1929 log.go:181] (0xc0007d3340) Reply frame received for 1\nI0720 02:30:56.984953 1929 log.go:181] (0xc0007d3340) (0xc000e040a0) Create stream\nI0720 02:30:56.984967 1929 log.go:181] (0xc0007d3340) (0xc000e040a0) Stream added, broadcasting: 3\nI0720 02:30:56.986055 1929 log.go:181] (0xc0007d3340) Reply frame received for 3\nI0720 02:30:56.986102 1929 log.go:181] (0xc0007d3340) (0xc000f20140) Create stream\nI0720 02:30:56.986129 1929 log.go:181] (0xc0007d3340) (0xc000f20140) Stream added, broadcasting: 5\nI0720 02:30:56.987187 1929 log.go:181] (0xc0007d3340) Reply frame received for 5\nI0720 02:30:57.056906 1929 log.go:181] (0xc0007d3340) Data frame received for 3\nI0720 02:30:57.056982 1929 log.go:181] (0xc000e040a0) (3) Data frame handling\nI0720 02:30:57.057040 1929 log.go:181] (0xc0007d3340) Data frame received for 5\nI0720 02:30:57.057068 1929 log.go:181] (0xc000f20140) (5) Data frame 
handling\nI0720 02:30:57.057102 1929 log.go:181] (0xc000f20140) (5) Data frame sent\nI0720 02:30:57.057122 1929 log.go:181] (0xc0007d3340) Data frame received for 5\nI0720 02:30:57.057132 1929 log.go:181] (0xc000f20140) (5) Data frame handling\n+ nc -zv -t -w 2 10.109.208.171 80\nConnection to 10.109.208.171 80 port [tcp/http] succeeded!\nI0720 02:30:57.058897 1929 log.go:181] (0xc0007d3340) Data frame received for 1\nI0720 02:30:57.058930 1929 log.go:181] (0xc000aeb4a0) (1) Data frame handling\nI0720 02:30:57.058948 1929 log.go:181] (0xc000aeb4a0) (1) Data frame sent\nI0720 02:30:57.058962 1929 log.go:181] (0xc0007d3340) (0xc000aeb4a0) Stream removed, broadcasting: 1\nI0720 02:30:57.059146 1929 log.go:181] (0xc0007d3340) Go away received\nI0720 02:30:57.059442 1929 log.go:181] (0xc0007d3340) (0xc000aeb4a0) Stream removed, broadcasting: 1\nI0720 02:30:57.059461 1929 log.go:181] (0xc0007d3340) (0xc000e040a0) Stream removed, broadcasting: 3\nI0720 02:30:57.059472 1929 log.go:181] (0xc0007d3340) (0xc000f20140) Stream removed, broadcasting: 5\n" Jul 20 02:30:57.065: INFO: stdout: "" Jul 20 02:30:57.065: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-8631 execpod-affinityvr27j -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.14 32630' Jul 20 02:30:57.283: INFO: stderr: "I0720 02:30:57.199395 1947 log.go:181] (0xc000ac8fd0) (0xc00043aaa0) Create stream\nI0720 02:30:57.199447 1947 log.go:181] (0xc000ac8fd0) (0xc00043aaa0) Stream added, broadcasting: 1\nI0720 02:30:57.203918 1947 log.go:181] (0xc000ac8fd0) Reply frame received for 1\nI0720 02:30:57.203983 1947 log.go:181] (0xc000ac8fd0) (0xc00068cc80) Create stream\nI0720 02:30:57.204007 1947 log.go:181] (0xc000ac8fd0) (0xc00068cc80) Stream added, broadcasting: 3\nI0720 02:30:57.204875 1947 log.go:181] (0xc000ac8fd0) Reply frame received for 3\nI0720 02:30:57.204921 1947 log.go:181] (0xc000ac8fd0) (0xc000512320) Create stream\nI0720 02:30:57.204939 1947 log.go:181] (0xc000ac8fd0) (0xc000512320) Stream added, broadcasting: 5\nI0720 02:30:57.205689 1947 log.go:181] (0xc000ac8fd0) Reply frame received for 5\nI0720 02:30:57.272568 1947 log.go:181] (0xc000ac8fd0) Data frame received for 5\nI0720 02:30:57.272597 1947 log.go:181] (0xc000512320) (5) Data frame handling\nI0720 02:30:57.272611 1947 log.go:181] (0xc000512320) (5) Data frame sent\nI0720 02:30:57.272622 1947 log.go:181] (0xc000ac8fd0) Data frame received for 5\nI0720 02:30:57.272631 1947 log.go:181] (0xc000512320) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.14 32630\nConnection to 172.18.0.14 32630 port [tcp/32630] succeeded!\nI0720 02:30:57.272663 1947 log.go:181] (0xc000ac8fd0) Data frame received for 3\nI0720 02:30:57.272675 1947 log.go:181] (0xc00068cc80) (3) Data frame handling\nI0720 02:30:57.274377 1947 log.go:181] (0xc000ac8fd0) Data frame received for 1\nI0720 02:30:57.274405 1947 log.go:181] (0xc00043aaa0) (1) Data frame handling\nI0720 02:30:57.274420 1947 log.go:181] (0xc00043aaa0) (1) Data frame sent\nI0720 02:30:57.274448 1947 log.go:181] (0xc000ac8fd0) (0xc00043aaa0) Stream removed, broadcasting: 1\nI0720 02:30:57.274776 1947 log.go:181] (0xc000ac8fd0) Go away received\nI0720 02:30:57.274914 1947 log.go:181] (0xc000ac8fd0) (0xc00043aaa0) Stream removed, broadcasting: 1\nI0720 02:30:57.274948 1947 log.go:181] (0xc000ac8fd0) (0xc00068cc80) Stream removed, broadcasting: 3\nI0720 02:30:57.274969 1947 log.go:181] (0xc000ac8fd0) (0xc000512320) Stream removed, broadcasting: 5\n" Jul 20 02:30:57.283: 
INFO: stdout: "" Jul 20 02:30:57.283: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-8631 execpod-affinityvr27j -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.12 32630' Jul 20 02:30:57.520: INFO: stderr: "I0720 02:30:57.423620 1965 log.go:181] (0xc000e22dc0) (0xc0008a2000) Create stream\nI0720 02:30:57.423688 1965 log.go:181] (0xc000e22dc0) (0xc0008a2000) Stream added, broadcasting: 1\nI0720 02:30:57.427724 1965 log.go:181] (0xc000e22dc0) Reply frame received for 1\nI0720 02:30:57.428004 1965 log.go:181] (0xc000e22dc0) (0xc000b0a000) Create stream\nI0720 02:30:57.428105 1965 log.go:181] (0xc000e22dc0) (0xc000b0a000) Stream added, broadcasting: 3\nI0720 02:30:57.429666 1965 log.go:181] (0xc000e22dc0) Reply frame received for 3\nI0720 02:30:57.429711 1965 log.go:181] (0xc000e22dc0) (0xc000b0aaa0) Create stream\nI0720 02:30:57.429733 1965 log.go:181] (0xc000e22dc0) (0xc000b0aaa0) Stream added, broadcasting: 5\nI0720 02:30:57.430845 1965 log.go:181] (0xc000e22dc0) Reply frame received for 5\nI0720 02:30:57.513356 1965 log.go:181] (0xc000e22dc0) Data frame received for 5\nI0720 02:30:57.513395 1965 log.go:181] (0xc000b0aaa0) (5) Data frame handling\nI0720 02:30:57.513411 1965 log.go:181] (0xc000b0aaa0) (5) Data frame sent\nI0720 02:30:57.513423 1965 log.go:181] (0xc000e22dc0) Data frame received for 5\nI0720 02:30:57.513433 1965 log.go:181] (0xc000b0aaa0) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.12 32630\nConnection to 172.18.0.12 32630 port [tcp/32630] succeeded!\nI0720 02:30:57.513480 1965 log.go:181] (0xc000e22dc0) Data frame received for 3\nI0720 02:30:57.513518 1965 log.go:181] (0xc000b0a000) (3) Data frame handling\nI0720 02:30:57.515130 1965 log.go:181] (0xc000e22dc0) Data frame received for 1\nI0720 02:30:57.515158 1965 log.go:181] (0xc0008a2000) (1) Data frame handling\nI0720 02:30:57.515177 1965 log.go:181] (0xc0008a2000) (1) Data frame sent\nI0720 02:30:57.515193 1965 log.go:181] (0xc000e22dc0) (0xc0008a2000) Stream removed, broadcasting: 1\nI0720 02:30:57.515556 1965 log.go:181] (0xc000e22dc0) (0xc0008a2000) Stream removed, broadcasting: 1\nI0720 02:30:57.515583 1965 log.go:181] (0xc000e22dc0) (0xc000b0a000) Stream removed, broadcasting: 3\nI0720 02:30:57.515597 1965 log.go:181] (0xc000e22dc0) (0xc000b0aaa0) Stream removed, broadcasting: 5\n" Jul 20 02:30:57.520: INFO: stdout: "" Jul 20 02:30:57.520: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-8631 execpod-affinityvr27j -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.14:32630/ ; done' Jul 20 02:30:57.839: INFO: stderr: "I0720 02:30:57.655981 1983 log.go:181] (0xc00003a2c0) (0xc000aa2b40) Create stream\nI0720 02:30:57.656042 1983 log.go:181] (0xc00003a2c0) (0xc000aa2b40) Stream added, broadcasting: 1\nI0720 02:30:57.658195 1983 log.go:181] (0xc00003a2c0) Reply frame received for 1\nI0720 02:30:57.658244 1983 log.go:181] (0xc00003a2c0) (0xc000989040) Create stream\nI0720 02:30:57.658262 1983 log.go:181] (0xc00003a2c0) (0xc000989040) Stream added, broadcasting: 3\nI0720 02:30:57.659379 1983 log.go:181] (0xc00003a2c0) Reply frame received for 3\nI0720 02:30:57.659414 1983 log.go:181] (0xc00003a2c0) (0xc000880820) Create stream\nI0720 02:30:57.659427 1983 log.go:181] (0xc00003a2c0) (0xc000880820) Stream added, broadcasting: 5\nI0720 02:30:57.660580 1983 log.go:181] (0xc00003a2c0) Reply frame received for 5\nI0720 
02:30:57.733716 1983 log.go:181] (0xc00003a2c0) Data frame received for 5\nI0720 02:30:57.733779 1983 log.go:181] (0xc000880820) (5) Data frame handling\nI0720 02:30:57.733804 1983 log.go:181] (0xc000880820) (5) Data frame sent\nI0720 02:30:57.733831 1983 log.go:181] (0xc00003a2c0) Data frame received for 3\nI0720 02:30:57.733849 1983 log.go:181] (0xc000989040) (3) Data frame handling\nI0720 02:30:57.733867 1983 log.go:181] (0xc000989040) (3) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32630/\nI0720 02:30:57.737215 1983 log.go:181] (0xc00003a2c0) Data frame received for 3\nI0720 02:30:57.737248 1983 log.go:181] (0xc000989040) (3) Data frame handling\nI0720 02:30:57.737278 1983 log.go:181] (0xc000989040) (3) Data frame sent\nI0720 02:30:57.737480 1983 log.go:181] (0xc00003a2c0) Data frame received for 5\nI0720 02:30:57.737501 1983 log.go:181] (0xc000880820) (5) Data frame handling\nI0720 02:30:57.737518 1983 log.go:181] (0xc000880820) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32630/\nI0720 02:30:57.737531 1983 log.go:181] (0xc00003a2c0) Data frame received for 3\nI0720 02:30:57.737546 1983 log.go:181] (0xc000989040) (3) Data frame handling\nI0720 02:30:57.737560 1983 log.go:181] (0xc000989040) (3) Data frame sent\nI0720 02:30:57.744871 1983 log.go:181] (0xc00003a2c0) Data frame received for 3\nI0720 02:30:57.744906 1983 log.go:181] (0xc000989040) (3) Data frame handling\nI0720 02:30:57.744931 1983 log.go:181] (0xc000989040) (3) Data frame sent\nI0720 02:30:57.745379 1983 log.go:181] (0xc00003a2c0) Data frame received for 5\nI0720 02:30:57.745399 1983 log.go:181] (0xc000880820) (5) Data frame handling\nI0720 02:30:57.745422 1983 log.go:181] (0xc000880820) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32630/\nI0720 02:30:57.745440 1983 log.go:181] (0xc00003a2c0) Data frame received for 3\nI0720 02:30:57.745451 1983 log.go:181] (0xc000989040) (3) Data frame handling\nI0720 02:30:57.745457 1983 log.go:181] (0xc000989040) (3) Data frame sent\nI0720 02:30:57.753916 1983 log.go:181] (0xc00003a2c0) Data frame received for 3\nI0720 02:30:57.754009 1983 log.go:181] (0xc000989040) (3) Data frame handling\nI0720 02:30:57.754132 1983 log.go:181] (0xc000989040) (3) Data frame sent\nI0720 02:30:57.754477 1983 log.go:181] (0xc00003a2c0) Data frame received for 3\nI0720 02:30:57.754550 1983 log.go:181] (0xc000989040) (3) Data frame handling\nI0720 02:30:57.754570 1983 log.go:181] (0xc000989040) (3) Data frame sent\nI0720 02:30:57.754821 1983 log.go:181] (0xc00003a2c0) Data frame received for 5\nI0720 02:30:57.754838 1983 log.go:181] (0xc000880820) (5) Data frame handling\nI0720 02:30:57.754851 1983 log.go:181] (0xc000880820) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32630/\nI0720 02:30:57.758608 1983 log.go:181] (0xc00003a2c0) Data frame received for 3\nI0720 02:30:57.758624 1983 log.go:181] (0xc000989040) (3) Data frame handling\nI0720 02:30:57.758634 1983 log.go:181] (0xc000989040) (3) Data frame sent\nI0720 02:30:57.759185 1983 log.go:181] (0xc00003a2c0) Data frame received for 3\nI0720 02:30:57.759213 1983 log.go:181] (0xc000989040) (3) Data frame handling\nI0720 02:30:57.759231 1983 log.go:181] (0xc000989040) (3) Data frame sent\nI0720 02:30:57.759247 1983 log.go:181] (0xc00003a2c0) Data frame received for 5\nI0720 02:30:57.759253 1983 log.go:181] (0xc000880820) (5) Data frame handling\nI0720 02:30:57.759259 1983 log.go:181] (0xc000880820) (5) 
Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32630/\nI0720 02:30:57.765822 1983 log.go:181] (0xc00003a2c0) Data frame received for 5\nI0720 02:30:57.765850 1983 log.go:181] (0xc000880820) (5) Data frame handling\nI0720 02:30:57.765861 1983 log.go:181] (0xc000880820) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32630/\nI0720 02:30:57.765881 1983 log.go:181] (0xc00003a2c0) Data frame received for 3\nI0720 02:30:57.765894 1983 log.go:181] (0xc000989040) (3) Data frame handling\nI0720 02:30:57.765903 1983 log.go:181] (0xc000989040) (3) Data frame sent\nI0720 02:30:57.769568 1983 log.go:181] (0xc00003a2c0) Data frame received for 3\nI0720 02:30:57.769586 1983 log.go:181] (0xc000989040) (3) Data frame handling\nI0720 02:30:57.769597 1983 log.go:181] (0xc000989040) (3) Data frame sent\nI0720 02:30:57.770058 1983 log.go:181] (0xc00003a2c0) Data frame received for 5\nI0720 02:30:57.770071 1983 log.go:181] (0xc000880820) (5) Data frame handling\nI0720 02:30:57.770081 1983 log.go:181] (0xc000880820) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32630/\nI0720 02:30:57.770123 1983 log.go:181] (0xc00003a2c0) Data frame received for 3\nI0720 02:30:57.770136 1983 log.go:181] (0xc000989040) (3) Data frame handling\nI0720 02:30:57.770147 1983 log.go:181] (0xc000989040) (3) Data frame sent\nI0720 02:30:57.774228 1983 log.go:181] (0xc00003a2c0) Data frame received for 3\nI0720 02:30:57.774250 1983 log.go:181] (0xc000989040) (3) Data frame handling\nI0720 02:30:57.774265 1983 log.go:181] (0xc000989040) (3) Data frame sent\nI0720 02:30:57.774611 1983 log.go:181] (0xc00003a2c0) Data frame received for 5\nI0720 02:30:57.774629 1983 log.go:181] (0xc000880820) (5) Data frame handling\nI0720 02:30:57.774637 1983 log.go:181] (0xc000880820) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32630/\nI0720 02:30:57.774648 1983 log.go:181] (0xc00003a2c0) Data frame received for 3\nI0720 02:30:57.774655 1983 log.go:181] (0xc000989040) (3) Data frame handling\nI0720 02:30:57.774662 1983 log.go:181] (0xc000989040) (3) Data frame sent\nI0720 02:30:57.780185 1983 log.go:181] (0xc00003a2c0) Data frame received for 3\nI0720 02:30:57.780206 1983 log.go:181] (0xc000989040) (3) Data frame handling\nI0720 02:30:57.780230 1983 log.go:181] (0xc000989040) (3) Data frame sent\nI0720 02:30:57.780940 1983 log.go:181] (0xc00003a2c0) Data frame received for 5\nI0720 02:30:57.780967 1983 log.go:181] (0xc00003a2c0) Data frame received for 3\nI0720 02:30:57.780996 1983 log.go:181] (0xc000989040) (3) Data frame handling\nI0720 02:30:57.781016 1983 log.go:181] (0xc000989040) (3) Data frame sent\nI0720 02:30:57.781030 1983 log.go:181] (0xc000880820) (5) Data frame handling\nI0720 02:30:57.781039 1983 log.go:181] (0xc000880820) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32630/\nI0720 02:30:57.787033 1983 log.go:181] (0xc00003a2c0) Data frame received for 3\nI0720 02:30:57.787060 1983 log.go:181] (0xc000989040) (3) Data frame handling\nI0720 02:30:57.787089 1983 log.go:181] (0xc000989040) (3) Data frame sent\nI0720 02:30:57.787418 1983 log.go:181] (0xc00003a2c0) Data frame received for 5\nI0720 02:30:57.787508 1983 log.go:181] (0xc000880820) (5) Data frame handling\nI0720 02:30:57.787540 1983 log.go:181] (0xc000880820) (5) Data frame sent\nI0720 02:30:57.787564 1983 log.go:181] (0xc00003a2c0) Data frame received for 5\nI0720 02:30:57.787583 1983 log.go:181] (0xc000880820) (5) 
Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32630/\nI0720 02:30:57.787601 1983 log.go:181] (0xc00003a2c0) Data frame received for 3\nI0720 02:30:57.787627 1983 log.go:181] (0xc000989040) (3) Data frame handling\nI0720 02:30:57.787652 1983 log.go:181] (0xc000989040) (3) Data frame sent\nI0720 02:30:57.787670 1983 log.go:181] (0xc000880820) (5) Data frame sent\nI0720 02:30:57.792416 1983 log.go:181] (0xc00003a2c0) Data frame received for 3\nI0720 02:30:57.792439 1983 log.go:181] (0xc000989040) (3) Data frame handling\nI0720 02:30:57.792460 1983 log.go:181] (0xc000989040) (3) Data frame sent\nI0720 02:30:57.793436 1983 log.go:181] (0xc00003a2c0) Data frame received for 3\nI0720 02:30:57.793455 1983 log.go:181] (0xc000989040) (3) Data frame handling\nI0720 02:30:57.793472 1983 log.go:181] (0xc000989040) (3) Data frame sent\nI0720 02:30:57.793490 1983 log.go:181] (0xc00003a2c0) Data frame received for 5\nI0720 02:30:57.793534 1983 log.go:181] (0xc000880820) (5) Data frame handling\nI0720 02:30:57.793562 1983 log.go:181] (0xc000880820) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32630/\nI0720 02:30:57.798838 1983 log.go:181] (0xc00003a2c0) Data frame received for 3\nI0720 02:30:57.798855 1983 log.go:181] (0xc000989040) (3) Data frame handling\nI0720 02:30:57.798865 1983 log.go:181] (0xc000989040) (3) Data frame sent\nI0720 02:30:57.799523 1983 log.go:181] (0xc00003a2c0) Data frame received for 3\nI0720 02:30:57.799545 1983 log.go:181] (0xc000989040) (3) Data frame handling\nI0720 02:30:57.799557 1983 log.go:181] (0xc000989040) (3) Data frame sent\nI0720 02:30:57.799573 1983 log.go:181] (0xc00003a2c0) Data frame received for 5\nI0720 02:30:57.799582 1983 log.go:181] (0xc000880820) (5) Data frame handling\nI0720 02:30:57.799592 1983 log.go:181] (0xc000880820) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32630/\nI0720 02:30:57.806796 1983 log.go:181] (0xc00003a2c0) Data frame received for 3\nI0720 02:30:57.806829 1983 log.go:181] (0xc000989040) (3) Data frame handling\nI0720 02:30:57.806848 1983 log.go:181] (0xc000989040) (3) Data frame sent\nI0720 02:30:57.807412 1983 log.go:181] (0xc00003a2c0) Data frame received for 5\nI0720 02:30:57.807448 1983 log.go:181] (0xc000880820) (5) Data frame handling\nI0720 02:30:57.807478 1983 log.go:181] (0xc000880820) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32630/\nI0720 02:30:57.807500 1983 log.go:181] (0xc00003a2c0) Data frame received for 3\nI0720 02:30:57.807518 1983 log.go:181] (0xc000989040) (3) Data frame handling\nI0720 02:30:57.807536 1983 log.go:181] (0xc000989040) (3) Data frame sent\nI0720 02:30:57.813246 1983 log.go:181] (0xc00003a2c0) Data frame received for 3\nI0720 02:30:57.813288 1983 log.go:181] (0xc000989040) (3) Data frame handling\nI0720 02:30:57.813322 1983 log.go:181] (0xc000989040) (3) Data frame sent\nI0720 02:30:57.813837 1983 log.go:181] (0xc00003a2c0) Data frame received for 3\nI0720 02:30:57.813855 1983 log.go:181] (0xc000989040) (3) Data frame handling\nI0720 02:30:57.813871 1983 log.go:181] (0xc000989040) (3) Data frame sent\nI0720 02:30:57.813886 1983 log.go:181] (0xc00003a2c0) Data frame received for 5\nI0720 02:30:57.813904 1983 log.go:181] (0xc000880820) (5) Data frame handling\nI0720 02:30:57.813914 1983 log.go:181] (0xc000880820) (5) Data frame sent\nI0720 02:30:57.813922 1983 log.go:181] (0xc00003a2c0) Data frame received for 5\nI0720 02:30:57.813930 1983 log.go:181] 
(0xc000880820) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32630/\nI0720 02:30:57.813946 1983 log.go:181] (0xc000880820) (5) Data frame sent\nI0720 02:30:57.819920 1983 log.go:181] (0xc00003a2c0) Data frame received for 3\nI0720 02:30:57.819941 1983 log.go:181] (0xc000989040) (3) Data frame handling\nI0720 02:30:57.819968 1983 log.go:181] (0xc000989040) (3) Data frame sent\nI0720 02:30:57.820678 1983 log.go:181] (0xc00003a2c0) Data frame received for 5\nI0720 02:30:57.820698 1983 log.go:181] (0xc000880820) (5) Data frame handling\nI0720 02:30:57.820790 1983 log.go:181] (0xc000880820) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32630/\nI0720 02:30:57.820952 1983 log.go:181] (0xc00003a2c0) Data frame received for 3\nI0720 02:30:57.820971 1983 log.go:181] (0xc000989040) (3) Data frame handling\nI0720 02:30:57.820988 1983 log.go:181] (0xc000989040) (3) Data frame sent\nI0720 02:30:57.826141 1983 log.go:181] (0xc00003a2c0) Data frame received for 3\nI0720 02:30:57.826160 1983 log.go:181] (0xc000989040) (3) Data frame handling\nI0720 02:30:57.826180 1983 log.go:181] (0xc000989040) (3) Data frame sent\nI0720 02:30:57.826820 1983 log.go:181] (0xc00003a2c0) Data frame received for 5\nI0720 02:30:57.826847 1983 log.go:181] (0xc000880820) (5) Data frame handling\nI0720 02:30:57.826863 1983 log.go:181] (0xc000880820) (5) Data frame sent\n+ echo\n+ curl -q -sI0720 02:30:57.826874 1983 log.go:181] (0xc00003a2c0) Data frame received for 5\nI0720 02:30:57.826884 1983 log.go:181] (0xc000880820) (5) Data frame handling\nI0720 02:30:57.826894 1983 log.go:181] (0xc000880820) (5) Data frame sent\n --connect-timeout 2 http://172.18.0.14:32630/\nI0720 02:30:57.826917 1983 log.go:181] (0xc00003a2c0) Data frame received for 3\nI0720 02:30:57.826952 1983 log.go:181] (0xc000989040) (3) Data frame handling\nI0720 02:30:57.826967 1983 log.go:181] (0xc000989040) (3) Data frame sent\nI0720 02:30:57.831108 1983 log.go:181] (0xc00003a2c0) Data frame received for 3\nI0720 02:30:57.831151 1983 log.go:181] (0xc000989040) (3) Data frame handling\nI0720 02:30:57.831186 1983 log.go:181] (0xc000989040) (3) Data frame sent\nI0720 02:30:57.831887 1983 log.go:181] (0xc00003a2c0) Data frame received for 5\nI0720 02:30:57.831922 1983 log.go:181] (0xc000880820) (5) Data frame handling\nI0720 02:30:57.832132 1983 log.go:181] (0xc00003a2c0) Data frame received for 3\nI0720 02:30:57.832160 1983 log.go:181] (0xc000989040) (3) Data frame handling\nI0720 02:30:57.833929 1983 log.go:181] (0xc00003a2c0) Data frame received for 1\nI0720 02:30:57.833953 1983 log.go:181] (0xc000aa2b40) (1) Data frame handling\nI0720 02:30:57.833965 1983 log.go:181] (0xc000aa2b40) (1) Data frame sent\nI0720 02:30:57.833982 1983 log.go:181] (0xc00003a2c0) (0xc000aa2b40) Stream removed, broadcasting: 1\nI0720 02:30:57.833997 1983 log.go:181] (0xc00003a2c0) Go away received\nI0720 02:30:57.834579 1983 log.go:181] (0xc00003a2c0) (0xc000aa2b40) Stream removed, broadcasting: 1\nI0720 02:30:57.834603 1983 log.go:181] (0xc00003a2c0) (0xc000989040) Stream removed, broadcasting: 3\nI0720 02:30:57.834615 1983 log.go:181] (0xc00003a2c0) (0xc000880820) Stream removed, broadcasting: 5\n" Jul 20 02:30:57.840: INFO: stdout: 
"\naffinity-nodeport-timeout-lh4dd\naffinity-nodeport-timeout-lh4dd\naffinity-nodeport-timeout-lh4dd\naffinity-nodeport-timeout-lh4dd\naffinity-nodeport-timeout-lh4dd\naffinity-nodeport-timeout-lh4dd\naffinity-nodeport-timeout-lh4dd\naffinity-nodeport-timeout-lh4dd\naffinity-nodeport-timeout-lh4dd\naffinity-nodeport-timeout-lh4dd\naffinity-nodeport-timeout-lh4dd\naffinity-nodeport-timeout-lh4dd\naffinity-nodeport-timeout-lh4dd\naffinity-nodeport-timeout-lh4dd\naffinity-nodeport-timeout-lh4dd\naffinity-nodeport-timeout-lh4dd" Jul 20 02:30:57.840: INFO: Received response from host: affinity-nodeport-timeout-lh4dd Jul 20 02:30:57.840: INFO: Received response from host: affinity-nodeport-timeout-lh4dd Jul 20 02:30:57.840: INFO: Received response from host: affinity-nodeport-timeout-lh4dd Jul 20 02:30:57.840: INFO: Received response from host: affinity-nodeport-timeout-lh4dd Jul 20 02:30:57.840: INFO: Received response from host: affinity-nodeport-timeout-lh4dd Jul 20 02:30:57.840: INFO: Received response from host: affinity-nodeport-timeout-lh4dd Jul 20 02:30:57.840: INFO: Received response from host: affinity-nodeport-timeout-lh4dd Jul 20 02:30:57.840: INFO: Received response from host: affinity-nodeport-timeout-lh4dd Jul 20 02:30:57.840: INFO: Received response from host: affinity-nodeport-timeout-lh4dd Jul 20 02:30:57.840: INFO: Received response from host: affinity-nodeport-timeout-lh4dd Jul 20 02:30:57.840: INFO: Received response from host: affinity-nodeport-timeout-lh4dd Jul 20 02:30:57.840: INFO: Received response from host: affinity-nodeport-timeout-lh4dd Jul 20 02:30:57.840: INFO: Received response from host: affinity-nodeport-timeout-lh4dd Jul 20 02:30:57.840: INFO: Received response from host: affinity-nodeport-timeout-lh4dd Jul 20 02:30:57.840: INFO: Received response from host: affinity-nodeport-timeout-lh4dd Jul 20 02:30:57.840: INFO: Received response from host: affinity-nodeport-timeout-lh4dd Jul 20 02:30:57.840: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-8631 execpod-affinityvr27j -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.18.0.14:32630/' Jul 20 02:30:58.072: INFO: stderr: "I0720 02:30:57.994043 2001 log.go:181] (0xc000141810) (0xc000780460) Create stream\nI0720 02:30:57.994089 2001 log.go:181] (0xc000141810) (0xc000780460) Stream added, broadcasting: 1\nI0720 02:30:57.995697 2001 log.go:181] (0xc000141810) Reply frame received for 1\nI0720 02:30:57.995745 2001 log.go:181] (0xc000141810) (0xc00070a0a0) Create stream\nI0720 02:30:57.995770 2001 log.go:181] (0xc000141810) (0xc00070a0a0) Stream added, broadcasting: 3\nI0720 02:30:57.996686 2001 log.go:181] (0xc000141810) Reply frame received for 3\nI0720 02:30:57.996870 2001 log.go:181] (0xc000141810) (0xc0006625a0) Create stream\nI0720 02:30:57.996896 2001 log.go:181] (0xc000141810) (0xc0006625a0) Stream added, broadcasting: 5\nI0720 02:30:57.997839 2001 log.go:181] (0xc000141810) Reply frame received for 5\nI0720 02:30:58.058316 2001 log.go:181] (0xc000141810) Data frame received for 5\nI0720 02:30:58.058358 2001 log.go:181] (0xc0006625a0) (5) Data frame handling\nI0720 02:30:58.058390 2001 log.go:181] (0xc0006625a0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32630/\nI0720 02:30:58.063812 2001 log.go:181] (0xc000141810) Data frame received for 3\nI0720 02:30:58.063857 2001 log.go:181] (0xc00070a0a0) (3) Data frame handling\nI0720 02:30:58.063897 2001 log.go:181] (0xc00070a0a0) (3) Data frame 
sent\nI0720 02:30:58.064850 2001 log.go:181] (0xc000141810) Data frame received for 5\nI0720 02:30:58.064887 2001 log.go:181] (0xc0006625a0) (5) Data frame handling\nI0720 02:30:58.064979 2001 log.go:181] (0xc000141810) Data frame received for 3\nI0720 02:30:58.065021 2001 log.go:181] (0xc00070a0a0) (3) Data frame handling\nI0720 02:30:58.066401 2001 log.go:181] (0xc000141810) Data frame received for 1\nI0720 02:30:58.066432 2001 log.go:181] (0xc000780460) (1) Data frame handling\nI0720 02:30:58.066457 2001 log.go:181] (0xc000780460) (1) Data frame sent\nI0720 02:30:58.066481 2001 log.go:181] (0xc000141810) (0xc000780460) Stream removed, broadcasting: 1\nI0720 02:30:58.066508 2001 log.go:181] (0xc000141810) Go away received\nI0720 02:30:58.067072 2001 log.go:181] (0xc000141810) (0xc000780460) Stream removed, broadcasting: 1\nI0720 02:30:58.067099 2001 log.go:181] (0xc000141810) (0xc00070a0a0) Stream removed, broadcasting: 3\nI0720 02:30:58.067111 2001 log.go:181] (0xc000141810) (0xc0006625a0) Stream removed, broadcasting: 5\n" Jul 20 02:30:58.073: INFO: stdout: "affinity-nodeport-timeout-lh4dd" Jul 20 02:31:13.073: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-8631 execpod-affinityvr27j -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.18.0.14:32630/' Jul 20 02:31:13.304: INFO: stderr: "I0720 02:31:13.217966 2019 log.go:181] (0xc00003b340) (0xc000b42500) Create stream\nI0720 02:31:13.218038 2019 log.go:181] (0xc00003b340) (0xc000b42500) Stream added, broadcasting: 1\nI0720 02:31:13.227931 2019 log.go:181] (0xc00003b340) Reply frame received for 1\nI0720 02:31:13.227989 2019 log.go:181] (0xc00003b340) (0xc000ca3180) Create stream\nI0720 02:31:13.228002 2019 log.go:181] (0xc00003b340) (0xc000ca3180) Stream added, broadcasting: 3\nI0720 02:31:13.228976 2019 log.go:181] (0xc00003b340) Reply frame received for 3\nI0720 02:31:13.229002 2019 log.go:181] (0xc00003b340) (0xc000b8a460) Create stream\nI0720 02:31:13.229010 2019 log.go:181] (0xc00003b340) (0xc000b8a460) Stream added, broadcasting: 5\nI0720 02:31:13.229923 2019 log.go:181] (0xc00003b340) Reply frame received for 5\nI0720 02:31:13.293640 2019 log.go:181] (0xc00003b340) Data frame received for 5\nI0720 02:31:13.293668 2019 log.go:181] (0xc000b8a460) (5) Data frame handling\nI0720 02:31:13.293687 2019 log.go:181] (0xc000b8a460) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32630/\nI0720 02:31:13.296938 2019 log.go:181] (0xc00003b340) Data frame received for 3\nI0720 02:31:13.296975 2019 log.go:181] (0xc000ca3180) (3) Data frame handling\nI0720 02:31:13.297008 2019 log.go:181] (0xc000ca3180) (3) Data frame sent\nI0720 02:31:13.297491 2019 log.go:181] (0xc00003b340) Data frame received for 3\nI0720 02:31:13.297514 2019 log.go:181] (0xc000ca3180) (3) Data frame handling\nI0720 02:31:13.297753 2019 log.go:181] (0xc00003b340) Data frame received for 5\nI0720 02:31:13.297774 2019 log.go:181] (0xc000b8a460) (5) Data frame handling\nI0720 02:31:13.299379 2019 log.go:181] (0xc00003b340) Data frame received for 1\nI0720 02:31:13.299400 2019 log.go:181] (0xc000b42500) (1) Data frame handling\nI0720 02:31:13.299416 2019 log.go:181] (0xc000b42500) (1) Data frame sent\nI0720 02:31:13.299503 2019 log.go:181] (0xc00003b340) (0xc000b42500) Stream removed, broadcasting: 1\nI0720 02:31:13.299544 2019 log.go:181] (0xc00003b340) Go away received\nI0720 02:31:13.299869 2019 log.go:181] (0xc00003b340) (0xc000b42500) Stream removed, 
broadcasting: 1\nI0720 02:31:13.299885 2019 log.go:181] (0xc00003b340) (0xc000ca3180) Stream removed, broadcasting: 3\nI0720 02:31:13.299893 2019 log.go:181] (0xc00003b340) (0xc000b8a460) Stream removed, broadcasting: 5\n" Jul 20 02:31:13.305: INFO: stdout: "affinity-nodeport-timeout-z22bg" Jul 20 02:31:13.305: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-8631, will wait for the garbage collector to delete the pods Jul 20 02:31:13.657: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 158.64561ms Jul 20 02:31:14.257: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 600.264221ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:31:23.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8631" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:735 • [SLOW TEST:59.080 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":294,"completed":134,"skipped":2342,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:31:23.915: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on tmpfs Jul 20 02:31:23.967: INFO: Waiting up to 5m0s for pod "pod-8b55aa16-86ad-44ed-9884-c5dbd94aa238" in namespace "emptydir-1720" to be "Succeeded or Failed" Jul 20 02:31:24.007: INFO: Pod "pod-8b55aa16-86ad-44ed-9884-c5dbd94aa238": Phase="Pending", Reason="", readiness=false. Elapsed: 40.330637ms Jul 20 02:31:26.011: INFO: Pod "pod-8b55aa16-86ad-44ed-9884-c5dbd94aa238": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043976764s Jul 20 02:31:28.015: INFO: Pod "pod-8b55aa16-86ad-44ed-9884-c5dbd94aa238": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.048228879s STEP: Saw pod success Jul 20 02:31:28.015: INFO: Pod "pod-8b55aa16-86ad-44ed-9884-c5dbd94aa238" satisfied condition "Succeeded or Failed" Jul 20 02:31:28.019: INFO: Trying to get logs from node latest-worker2 pod pod-8b55aa16-86ad-44ed-9884-c5dbd94aa238 container test-container: STEP: delete the pod Jul 20 02:31:28.075: INFO: Waiting for pod pod-8b55aa16-86ad-44ed-9884-c5dbd94aa238 to disappear Jul 20 02:31:28.080: INFO: Pod pod-8b55aa16-86ad-44ed-9884-c5dbd94aa238 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:31:28.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1720" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":135,"skipped":2346,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:31:28.087: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jul 20 02:31:28.465: INFO: Waiting up to 5m0s for pod "downwardapi-volume-257a6f6e-121e-43c0-a3fa-f22743bd80d8" in namespace "downward-api-472" to be "Succeeded or Failed" Jul 20 02:31:28.512: INFO: Pod "downwardapi-volume-257a6f6e-121e-43c0-a3fa-f22743bd80d8": Phase="Pending", Reason="", readiness=false. Elapsed: 46.431359ms Jul 20 02:31:30.518: INFO: Pod "downwardapi-volume-257a6f6e-121e-43c0-a3fa-f22743bd80d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05255721s Jul 20 02:31:32.523: INFO: Pod "downwardapi-volume-257a6f6e-121e-43c0-a3fa-f22743bd80d8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.05750158s STEP: Saw pod success Jul 20 02:31:32.523: INFO: Pod "downwardapi-volume-257a6f6e-121e-43c0-a3fa-f22743bd80d8" satisfied condition "Succeeded or Failed" Jul 20 02:31:32.526: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-257a6f6e-121e-43c0-a3fa-f22743bd80d8 container client-container: STEP: delete the pod Jul 20 02:31:32.564: INFO: Waiting for pod downwardapi-volume-257a6f6e-121e-43c0-a3fa-f22743bd80d8 to disappear Jul 20 02:31:32.577: INFO: Pod downwardapi-volume-257a6f6e-121e-43c0-a3fa-f22743bd80d8 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:31:32.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-472" for this suite. 
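The downward API volume test above verifies that a container can read its own memory limit from a file the kubelet projects into the pod. A minimal sketch of the pattern being exercised, assuming illustrative names (the suite generates its own pod name and uses its own test image):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-memlimit-demo    # illustrative name, not the suite's fixture
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
    resources:
      limits:
        memory: "64Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: mem_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
EOF
# after the pod reaches Succeeded, its log should hold the limit in bytes (67108864)
kubectl logs downward-memlimit-demo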
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":294,"completed":136,"skipped":2354,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:31:32.585: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-6796d12d-b08a-4145-9b4b-91279f8f6218 STEP: Creating a pod to test consume secrets Jul 20 02:31:32.694: INFO: Waiting up to 5m0s for pod "pod-secrets-55dfe9e9-d5b4-4c25-bf3b-82aad76b502f" in namespace "secrets-6006" to be "Succeeded or Failed" Jul 20 02:31:32.746: INFO: Pod "pod-secrets-55dfe9e9-d5b4-4c25-bf3b-82aad76b502f": Phase="Pending", Reason="", readiness=false. Elapsed: 51.714334ms Jul 20 02:31:34.750: INFO: Pod "pod-secrets-55dfe9e9-d5b4-4c25-bf3b-82aad76b502f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055283014s Jul 20 02:31:36.754: INFO: Pod "pod-secrets-55dfe9e9-d5b4-4c25-bf3b-82aad76b502f": Phase="Running", Reason="", readiness=true. Elapsed: 4.059959989s Jul 20 02:31:38.759: INFO: Pod "pod-secrets-55dfe9e9-d5b4-4c25-bf3b-82aad76b502f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.064352417s STEP: Saw pod success Jul 20 02:31:38.759: INFO: Pod "pod-secrets-55dfe9e9-d5b4-4c25-bf3b-82aad76b502f" satisfied condition "Succeeded or Failed" Jul 20 02:31:38.761: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-55dfe9e9-d5b4-4c25-bf3b-82aad76b502f container secret-volume-test: STEP: delete the pod Jul 20 02:31:38.799: INFO: Waiting for pod pod-secrets-55dfe9e9-d5b4-4c25-bf3b-82aad76b502f to disappear Jul 20 02:31:38.822: INFO: Pod pod-secrets-55dfe9e9-d5b4-4c25-bf3b-82aad76b502f no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:31:38.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6006" for this suite. 
• [SLOW TEST:6.244 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":294,"completed":137,"skipped":2383,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:31:38.830: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Jul 20 02:31:43.617: INFO: &Pod{ObjectMeta:{send-events-9c0d3cae-7cea-464b-ba55-c1e9e988930c events-3272 /api/v1/namespaces/events-3272/pods/send-events-9c0d3cae-7cea-464b-ba55-c1e9e988930c 45f5be60-afd9-438c-a1cb-cf04b0e63960 98270 0 2020-07-20 02:31:39 +0000 UTC map[name:foo time:437275513] map[] [] [] [{e2e.test Update v1 2020-07-20 02:31:39 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"p\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-07-20 02:31:42 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.236\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8b8jf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8b8jf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8b8jf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:31:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:31:42 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:31:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:31:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:10.244.2.236,StartTime:2020-07-20 02:31:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-20 02:31:42 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0,ContainerID:containerd://2fe0a2eafbc834bba24c561700c506f420d468aabb047e816ea6c73b71117e1e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.236,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Jul 20 02:31:45.622: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Jul 20 02:31:47.626: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:31:47.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-3272" for this suite. • [SLOW TEST:8.861 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":294,"completed":138,"skipped":2401,"failed":0} SSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:31:47.692: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:731 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-434 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-434 STEP: creating replication controller externalsvc in namespace services-434 I0720 02:31:47.889138 8 
runners.go:190] Created replication controller with name: externalsvc, namespace: services-434, replica count: 2 I0720 02:31:50.939495 8 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0720 02:31:53.939705 8 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Jul 20 02:31:54.020: INFO: Creating new exec pod Jul 20 02:31:58.058: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-434 execpodljch6 -- /bin/sh -x -c nslookup clusterip-service.services-434.svc.cluster.local' Jul 20 02:31:58.296: INFO: stderr: "I0720 02:31:58.200099 2037 log.go:181] (0xc00018c370) (0xc000783680) Create stream\nI0720 02:31:58.200147 2037 log.go:181] (0xc00018c370) (0xc000783680) Stream added, broadcasting: 1\nI0720 02:31:58.201881 2037 log.go:181] (0xc00018c370) Reply frame received for 1\nI0720 02:31:58.201920 2037 log.go:181] (0xc00018c370) (0xc0004e94a0) Create stream\nI0720 02:31:58.201930 2037 log.go:181] (0xc00018c370) (0xc0004e94a0) Stream added, broadcasting: 3\nI0720 02:31:58.202738 2037 log.go:181] (0xc00018c370) Reply frame received for 3\nI0720 02:31:58.202772 2037 log.go:181] (0xc00018c370) (0xc0001feb40) Create stream\nI0720 02:31:58.202784 2037 log.go:181] (0xc00018c370) (0xc0001feb40) Stream added, broadcasting: 5\nI0720 02:31:58.203667 2037 log.go:181] (0xc00018c370) Reply frame received for 5\nI0720 02:31:58.281670 2037 log.go:181] (0xc00018c370) Data frame received for 5\nI0720 02:31:58.281697 2037 log.go:181] (0xc0001feb40) (5) Data frame handling\nI0720 02:31:58.281710 2037 log.go:181] (0xc0001feb40) (5) Data frame sent\n+ nslookup clusterip-service.services-434.svc.cluster.local\nI0720 02:31:58.288414 2037 log.go:181] (0xc00018c370) Data frame received for 3\nI0720 02:31:58.288439 2037 log.go:181] (0xc0004e94a0) (3) Data frame handling\nI0720 02:31:58.288455 2037 log.go:181] (0xc0004e94a0) (3) Data frame sent\nI0720 02:31:58.289268 2037 log.go:181] (0xc00018c370) Data frame received for 3\nI0720 02:31:58.289281 2037 log.go:181] (0xc0004e94a0) (3) Data frame handling\nI0720 02:31:58.289288 2037 log.go:181] (0xc0004e94a0) (3) Data frame sent\nI0720 02:31:58.289603 2037 log.go:181] (0xc00018c370) Data frame received for 3\nI0720 02:31:58.289618 2037 log.go:181] (0xc0004e94a0) (3) Data frame handling\nI0720 02:31:58.289644 2037 log.go:181] (0xc00018c370) Data frame received for 5\nI0720 02:31:58.289662 2037 log.go:181] (0xc0001feb40) (5) Data frame handling\nI0720 02:31:58.291311 2037 log.go:181] (0xc00018c370) Data frame received for 1\nI0720 02:31:58.291399 2037 log.go:181] (0xc000783680) (1) Data frame handling\nI0720 02:31:58.291430 2037 log.go:181] (0xc000783680) (1) Data frame sent\nI0720 02:31:58.291444 2037 log.go:181] (0xc00018c370) (0xc000783680) Stream removed, broadcasting: 1\nI0720 02:31:58.291459 2037 log.go:181] (0xc00018c370) Go away received\nI0720 02:31:58.291789 2037 log.go:181] (0xc00018c370) (0xc000783680) Stream removed, broadcasting: 1\nI0720 02:31:58.291805 2037 log.go:181] (0xc00018c370) (0xc0004e94a0) Stream removed, broadcasting: 3\nI0720 02:31:58.291812 2037 log.go:181] (0xc00018c370) (0xc0001feb40) Stream removed, broadcasting: 5\n" Jul 20 02:31:58.296: INFO: stdout: 
"Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-434.svc.cluster.local\tcanonical name = externalsvc.services-434.svc.cluster.local.\nName:\texternalsvc.services-434.svc.cluster.local\nAddress: 10.97.17.188\n\n" STEP: deleting ReplicationController externalsvc in namespace services-434, will wait for the garbage collector to delete the pods Jul 20 02:31:58.352: INFO: Deleting ReplicationController externalsvc took: 3.457323ms Jul 20 02:31:58.852: INFO: Terminating ReplicationController externalsvc pods took: 500.211967ms Jul 20 02:32:13.987: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:32:14.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-434" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:735 • [SLOW TEST:26.326 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":294,"completed":139,"skipped":2409,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:32:14.018: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-619817e1-b9ef-45b9-861f-6f244981c6a4 STEP: Creating a pod to test consume secrets Jul 20 02:32:14.132: INFO: Waiting up to 5m0s for pod "pod-secrets-76ff8634-2eb0-411a-ba0d-a450b418a335" in namespace "secrets-6703" to be "Succeeded or Failed" Jul 20 02:32:14.136: INFO: Pod "pod-secrets-76ff8634-2eb0-411a-ba0d-a450b418a335": Phase="Pending", Reason="", readiness=false. Elapsed: 3.923576ms Jul 20 02:32:16.188: INFO: Pod "pod-secrets-76ff8634-2eb0-411a-ba0d-a450b418a335": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056138277s Jul 20 02:32:18.192: INFO: Pod "pod-secrets-76ff8634-2eb0-411a-ba0d-a450b418a335": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.060014646s STEP: Saw pod success Jul 20 02:32:18.192: INFO: Pod "pod-secrets-76ff8634-2eb0-411a-ba0d-a450b418a335" satisfied condition "Succeeded or Failed" Jul 20 02:32:18.195: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-76ff8634-2eb0-411a-ba0d-a450b418a335 container secret-env-test: STEP: delete the pod Jul 20 02:32:18.434: INFO: Waiting for pod pod-secrets-76ff8634-2eb0-411a-ba0d-a450b418a335 to disappear Jul 20 02:32:18.589: INFO: Pod pod-secrets-76ff8634-2eb0-411a-ba0d-a450b418a335 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:32:18.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6703" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":294,"completed":140,"skipped":2448,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:32:18.790: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 20 02:32:18.960: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:32:25.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4795" for this suite. 
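The websocket test above retrieves container logs by opening the pod-log endpoint with a websocket upgrade rather than plain HTTP streaming. A conceptual sketch of the endpoint involved, reachable through kubectl proxy (pod and namespace names are illustrative; the e2e framework speaks the websocket protocol directly rather than using curl):

kubectl proxy --port=8001 &
# the same endpoint the test hits, streamed over plain HTTP:
curl "http://127.0.0.1:8001/api/v1/namespaces/default/pods/my-pod/log?follow=true"
# a websocket client requests the identical path with
# "Connection: Upgrade" and "Upgrade: websocket" headers,
# which is what this test exercises against the API server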
• [SLOW TEST:6.272 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":294,"completed":141,"skipped":2474,"failed":0} SS ------------------------------ [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:32:25.062: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a test event STEP: listing all events in all namespaces STEP: patching the test event STEP: fetching the test event STEP: deleting the test event STEP: listing all events in all namespaces [AfterEach] [sig-api-machinery] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:32:25.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-6755" for this suite. 
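The events test above walks a core/v1 Event through its full lifecycle against the API. The same sequence can be approximated with kubectl; the event name, target object, and messages below are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Event
metadata:
  name: demo-event               # illustrative name
  namespace: default
involvedObject:
  kind: Pod
  name: some-pod                 # a bare Event does not require this object to exist
  namespace: default
reason: Testing
type: Normal
message: original message
EOF
kubectl get events --all-namespaces                 # list
kubectl -n default patch event demo-event --type=merge -p '{"message":"patched message"}'
kubectl -n default get event demo-event             # fetch
kubectl -n default delete event demo-event          # delete
kubectl get events --all-namespaces                 # list again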
•{"msg":"PASSED [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":294,"completed":142,"skipped":2476,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:32:25.185: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6948.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-6948.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6948.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-6948.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jul 20 02:32:31.313: INFO: DNS probes using dns-test-3780a1d2-eb46-44df-a6cb-ca98c96284e3 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6948.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-6948.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6948.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-6948.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jul 20 02:32:37.464: INFO: File wheezy_udp@dns-test-service-3.dns-6948.svc.cluster.local from pod dns-6948/dns-test-ed590dd4-5bd1-43ee-be92-44883ba5f401 contains 'foo.example.com. ' instead of 'bar.example.com.' Jul 20 02:32:37.466: INFO: File jessie_udp@dns-test-service-3.dns-6948.svc.cluster.local from pod dns-6948/dns-test-ed590dd4-5bd1-43ee-be92-44883ba5f401 contains 'foo.example.com. ' instead of 'bar.example.com.' Jul 20 02:32:37.466: INFO: Lookups using dns-6948/dns-test-ed590dd4-5bd1-43ee-be92-44883ba5f401 failed for: [wheezy_udp@dns-test-service-3.dns-6948.svc.cluster.local jessie_udp@dns-test-service-3.dns-6948.svc.cluster.local] Jul 20 02:32:42.470: INFO: File wheezy_udp@dns-test-service-3.dns-6948.svc.cluster.local from pod dns-6948/dns-test-ed590dd4-5bd1-43ee-be92-44883ba5f401 contains 'foo.example.com. ' instead of 'bar.example.com.' Jul 20 02:32:42.473: INFO: File jessie_udp@dns-test-service-3.dns-6948.svc.cluster.local from pod dns-6948/dns-test-ed590dd4-5bd1-43ee-be92-44883ba5f401 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Jul 20 02:32:42.473: INFO: Lookups using dns-6948/dns-test-ed590dd4-5bd1-43ee-be92-44883ba5f401 failed for: [wheezy_udp@dns-test-service-3.dns-6948.svc.cluster.local jessie_udp@dns-test-service-3.dns-6948.svc.cluster.local] Jul 20 02:32:47.471: INFO: File wheezy_udp@dns-test-service-3.dns-6948.svc.cluster.local from pod dns-6948/dns-test-ed590dd4-5bd1-43ee-be92-44883ba5f401 contains 'foo.example.com. ' instead of 'bar.example.com.' Jul 20 02:32:47.474: INFO: File jessie_udp@dns-test-service-3.dns-6948.svc.cluster.local from pod dns-6948/dns-test-ed590dd4-5bd1-43ee-be92-44883ba5f401 contains 'foo.example.com. ' instead of 'bar.example.com.' Jul 20 02:32:47.474: INFO: Lookups using dns-6948/dns-test-ed590dd4-5bd1-43ee-be92-44883ba5f401 failed for: [wheezy_udp@dns-test-service-3.dns-6948.svc.cluster.local jessie_udp@dns-test-service-3.dns-6948.svc.cluster.local] Jul 20 02:32:52.472: INFO: File wheezy_udp@dns-test-service-3.dns-6948.svc.cluster.local from pod dns-6948/dns-test-ed590dd4-5bd1-43ee-be92-44883ba5f401 contains 'foo.example.com. ' instead of 'bar.example.com.' Jul 20 02:32:52.476: INFO: File jessie_udp@dns-test-service-3.dns-6948.svc.cluster.local from pod dns-6948/dns-test-ed590dd4-5bd1-43ee-be92-44883ba5f401 contains 'foo.example.com. ' instead of 'bar.example.com.' Jul 20 02:32:52.476: INFO: Lookups using dns-6948/dns-test-ed590dd4-5bd1-43ee-be92-44883ba5f401 failed for: [wheezy_udp@dns-test-service-3.dns-6948.svc.cluster.local jessie_udp@dns-test-service-3.dns-6948.svc.cluster.local] Jul 20 02:32:57.474: INFO: File wheezy_udp@dns-test-service-3.dns-6948.svc.cluster.local from pod dns-6948/dns-test-ed590dd4-5bd1-43ee-be92-44883ba5f401 contains 'foo.example.com. ' instead of 'bar.example.com.' Jul 20 02:32:57.478: INFO: File jessie_udp@dns-test-service-3.dns-6948.svc.cluster.local from pod dns-6948/dns-test-ed590dd4-5bd1-43ee-be92-44883ba5f401 contains 'foo.example.com. ' instead of 'bar.example.com.' Jul 20 02:32:57.478: INFO: Lookups using dns-6948/dns-test-ed590dd4-5bd1-43ee-be92-44883ba5f401 failed for: [wheezy_udp@dns-test-service-3.dns-6948.svc.cluster.local jessie_udp@dns-test-service-3.dns-6948.svc.cluster.local] Jul 20 02:33:02.476: INFO: DNS probes using dns-test-ed590dd4-5bd1-43ee-be92-44883ba5f401 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6948.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-6948.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6948.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-6948.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jul 20 02:33:11.059: INFO: DNS probes using dns-test-211a8e15-3f26-4b9b-a7ee-6942dd3b4469 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:33:11.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6948" for this suite. 
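The DNS test above creates an ExternalName service, confirms that the service's cluster DNS name resolves as a CNAME to the external host, retargets it to bar.example.com, and finally converts it to ClusterIP. The first two steps look roughly like this, with illustrative service and namespace names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: dns-demo-service         # illustrative name
spec:
  type: ExternalName
  externalName: foo.example.com
EOF
# from any pod with a resolver, the service name should come back as a CNAME:
kubectl run dns-probe --rm -it --restart=Never --image=busybox -- \
  nslookup dns-demo-service.default.svc.cluster.local
# retargeting the CNAME is a one-field patch, as the test does mid-run:
kubectl patch service dns-demo-service -p '{"spec":{"externalName":"bar.example.com"}}'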
• [SLOW TEST:46.028 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":294,"completed":143,"skipped":2493,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:33:11.214: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-9503b53e-01c4-456c-99d7-264065b8bcaf STEP: Creating a pod to test consume configMaps Jul 20 02:33:11.746: INFO: Waiting up to 5m0s for pod "pod-configmaps-e4302be5-ec5e-4e1e-92fc-e30ff93ee575" in namespace "configmap-9427" to be "Succeeded or Failed" Jul 20 02:33:11.749: INFO: Pod "pod-configmaps-e4302be5-ec5e-4e1e-92fc-e30ff93ee575": Phase="Pending", Reason="", readiness=false. Elapsed: 2.890995ms Jul 20 02:33:13.842: INFO: Pod "pod-configmaps-e4302be5-ec5e-4e1e-92fc-e30ff93ee575": Phase="Pending", Reason="", readiness=false. Elapsed: 2.095805026s Jul 20 02:33:15.846: INFO: Pod "pod-configmaps-e4302be5-ec5e-4e1e-92fc-e30ff93ee575": Phase="Running", Reason="", readiness=true. Elapsed: 4.100557235s Jul 20 02:33:17.850: INFO: Pod "pod-configmaps-e4302be5-ec5e-4e1e-92fc-e30ff93ee575": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.104032715s STEP: Saw pod success Jul 20 02:33:17.850: INFO: Pod "pod-configmaps-e4302be5-ec5e-4e1e-92fc-e30ff93ee575" satisfied condition "Succeeded or Failed" Jul 20 02:33:17.852: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-e4302be5-ec5e-4e1e-92fc-e30ff93ee575 container configmap-volume-test: STEP: delete the pod Jul 20 02:33:17.894: INFO: Waiting for pod pod-configmaps-e4302be5-ec5e-4e1e-92fc-e30ff93ee575 to disappear Jul 20 02:33:17.917: INFO: Pod pod-configmaps-e4302be5-ec5e-4e1e-92fc-e30ff93ee575 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:33:17.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9427" for this suite. 
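In the configMap test above, defaultMode sets the permission bits on every file the volume projects. A minimal sketch, assuming illustrative names and 0400 as the example mode (the suite uses its own generated names and checks the mode from inside its own test image):

kubectl create configmap demo-config --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-mode-demo      # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "stat -L -c '%a' /etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: demo-config
      defaultMode: 0400          # octal in YAML; JSON manifests need decimal 256
EOF
# the pod log should print 400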
• [SLOW TEST:6.722 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":144,"skipped":2510,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:33:17.936: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 20 02:33:18.017: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Jul 20 02:33:20.960: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6025 create -f -' Jul 20 02:33:32.208: INFO: stderr: "" Jul 20 02:33:32.208: INFO: stdout: "e2e-test-crd-publish-openapi-8187-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Jul 20 02:33:32.209: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6025 delete e2e-test-crd-publish-openapi-8187-crds test-cr' Jul 20 02:33:32.319: INFO: stderr: "" Jul 20 02:33:32.319: INFO: stdout: "e2e-test-crd-publish-openapi-8187-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Jul 20 02:33:32.319: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6025 apply -f -' Jul 20 02:33:32.660: INFO: stderr: "" Jul 20 02:33:32.660: INFO: stdout: "e2e-test-crd-publish-openapi-8187-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Jul 20 02:33:32.660: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6025 delete e2e-test-crd-publish-openapi-8187-crds test-cr' Jul 20 02:33:33.137: INFO: stderr: "" Jul 20 02:33:33.137: INFO: stdout: "e2e-test-crd-publish-openapi-8187-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Jul 20 02:33:33.137: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8187-crds' Jul 20 02:33:33.490: INFO: stderr: "" Jul 20 02:33:33.490: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8187-crd\nVERSION: 
crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:33:36.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6025" for this suite. • [SLOW TEST:19.374 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":294,"completed":145,"skipped":2512,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:33:37.310: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-1782 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-1782 STEP: Creating statefulset with conflicting port in namespace statefulset-1782 STEP: Waiting until pod test-pod will start running in namespace statefulset-1782 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-1782 Jul 20 02:33:52.545: INFO: Observed 
stateful pod in namespace: statefulset-1782, name: ss-0, uid: a2da59b9-1691-4f62-820b-22e0f84f7cfa, status phase: Pending. Waiting for the statefulset controller to delete it. Jul 20 02:33:52.552: INFO: Observed stateful pod in namespace: statefulset-1782, name: ss-0, uid: a2da59b9-1691-4f62-820b-22e0f84f7cfa, status phase: Failed. Waiting for the statefulset controller to delete it. Jul 20 02:33:53.373: INFO: Observed stateful pod in namespace: statefulset-1782, name: ss-0, uid: a2da59b9-1691-4f62-820b-22e0f84f7cfa, status phase: Failed. Waiting for the statefulset controller to delete it. Jul 20 02:33:53.844: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-1782 STEP: Removing pod with conflicting port in namespace statefulset-1782 STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-1782 and running [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Jul 20 02:34:00.788: INFO: Deleting all statefulsets in ns statefulset-1782 Jul 20 02:34:00.791: INFO: Scaling statefulset ss to 0 Jul 20 02:34:20.820: INFO: Waiting for statefulset status.replicas to be updated to 0 Jul 20 02:34:20.822: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:34:20.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1782" for this suite. • [SLOW TEST:43.582 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":294,"completed":146,"skipped":2532,"failed":0} SSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:34:20.893: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Jul 20 02:34:20.941: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jul 20 02:34:20.956: INFO: Waiting for terminating namespaces to be deleted...
Jul 20 02:34:20.959: INFO: Logging pods the apiserver thinks are on node latest-worker before test Jul 20 02:34:20.964: INFO: coredns-f9fd979d6-s745j from kube-system started at 2020-07-19 21:39:25 +0000 UTC (1 container status recorded) Jul 20 02:34:20.964: INFO: Container coredns ready: true, restart count 0 Jul 20 02:34:20.964: INFO: coredns-f9fd979d6-zs4sj from kube-system started at 2020-07-19 21:39:36 +0000 UTC (1 container status recorded) Jul 20 02:34:20.964: INFO: Container coredns ready: true, restart count 0 Jul 20 02:34:20.964: INFO: kindnet-46dnt from kube-system started at 2020-07-19 21:38:46 +0000 UTC (1 container status recorded) Jul 20 02:34:20.964: INFO: Container kindnet-cni ready: true, restart count 0 Jul 20 02:34:20.964: INFO: kube-proxy-sxpg9 from kube-system started at 2020-07-19 21:38:45 +0000 UTC (1 container status recorded) Jul 20 02:34:20.964: INFO: Container kube-proxy ready: true, restart count 0 Jul 20 02:34:20.964: INFO: local-path-provisioner-8b46957d4-2gzpd from local-path-storage started at 2020-07-19 21:39:25 +0000 UTC (1 container status recorded) Jul 20 02:34:20.964: INFO: Container local-path-provisioner ready: true, restart count 0 Jul 20 02:34:20.964: INFO: Logging pods the apiserver thinks are on node latest-worker2 before test Jul 20 02:34:20.973: INFO: kindnet-g6zbt from kube-system started at 2020-07-19 21:38:46 +0000 UTC (1 container status recorded) Jul 20 02:34:20.973: INFO: Container kindnet-cni ready: true, restart count 0 Jul 20 02:34:20.973: INFO: kube-proxy-nsnzn from kube-system started at 2020-07-19 21:38:45 +0000 UTC (1 container status recorded) Jul 20 02:34:20.973: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-d0dc981b-953b-42bd-ba07-c0770da5e0bf 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-d0dc981b-953b-42bd-ba07-c0770da5e0bf off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-d0dc981b-953b-42bd-ba07-c0770da5e0bf [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:34:29.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3020" for this suite.
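The scheduling test above applies a random label to a chosen node and relaunches the pod with a matching nodeSelector. Done by hand, the same check looks roughly like this (the label key/value and pod name are illustrative; the suite generates a kubernetes.io/e2e-* key):

kubectl label node latest-worker2 example.com/e2e-demo=42
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nodeselector-demo        # illustrative name
spec:
  nodeSelector:
    example.com/e2e-demo: "42"
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.2
EOF
kubectl get pod nodeselector-demo -o wide            # should land on the labeled node
kubectl delete pod nodeselector-demo
kubectl label node latest-worker2 example.com/e2e-demo-   # trailing dash removes the label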
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:8.239 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":294,"completed":147,"skipped":2537,"failed":0} SSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:34:29.132: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Jul 20 02:34:36.689: INFO: 10 pods remaining Jul 20 02:34:36.689: INFO: 9 pods have nil DeletionTimestamp Jul 20 02:34:36.689: INFO: Jul 20 02:34:38.501: INFO: 0 pods remaining Jul 20 02:34:38.501: INFO: 0 pods have nil DeletionTimestamp Jul 20 02:34:38.501: INFO: Jul 20 02:34:39.162: INFO: 0 pods remaining Jul 20 02:34:39.162: INFO: 0 pods have nil DeletionTimestamp Jul 20 02:34:39.162: INFO: STEP: Gathering metrics W0720 02:34:40.594069 8 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Jul 20 02:35:42.622: INFO: MetricsGrabber failed to grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:35:42.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6670" for this suite.
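------------------------------
Reference sketch: the deleteOptions semantics verified above are foreground cascading deletion: the owner object is kept (with a deletionTimestamp and the foregroundDeletion finalizer) until the garbage collector has removed all dependents, which matches the "pods remaining / pods have nil DeletionTimestamp" countdown in the log. A minimal client-go sketch, with the RC name and namespace as illustrative placeholders:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Foreground propagation: the RC is retained, carrying a deletionTimestamp
	// and the foregroundDeletion finalizer, until the GC has deleted its pods.
	fg := metav1.DeletePropagationForeground
	// "simpletest-rc" in "default" is an illustrative placeholder; the e2e
	// test creates its own RC in a per-test namespace.
	if err := cs.CoreV1().ReplicationControllers("default").Delete(
		context.TODO(), "simpletest-rc", metav1.DeleteOptions{PropagationPolicy: &fg}); err != nil {
		panic(err)
	}
	fmt.Println("foreground delete requested; the rc stays visible until its pods are gone")
}

With Background propagation the owner would disappear immediately instead, so Foreground is what makes "keep the rc around until all its pods are deleted" observable.
------------------------------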
• [SLOW TEST:73.498 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":294,"completed":148,"skipped":2541,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:35:42.631: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Jul 20 02:35:43.790: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Jul 20 02:35:45.839: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730809343, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730809343, loc:(*time.Location)(0x7deddc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730809343, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730809343, loc:(*time.Location)(0x7deddc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-84c84cf5f9\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 20 02:35:47.844: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730809343, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730809343, loc:(*time.Location)(0x7deddc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730809343, 
loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730809343, loc:(*time.Location)(0x7deddc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-84c84cf5f9\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 20 02:35:50.878: INFO: Waiting for the number of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 20 02:35:50.882: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:35:52.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-1879" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:9.462 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":294,"completed":149,"skipped":2542,"failed":0} SSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:35:52.093: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 20 02:35:52.167: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-9706 I0720 02:35:52.182146 8 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-9706, replica count: 1 I0720 02:35:53.232547 8 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0720 02:35:54.232835 8 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0720 02:35:55.232964 8 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0720 02:35:56.233188 8 runners.go:190] svc-latency-rc Pods: 1 out of 1
created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jul 20 02:35:56.389: INFO: Created: latency-svc-x5btr Jul 20 02:35:56.409: INFO: Got endpoints: latency-svc-x5btr [75.62768ms] Jul 20 02:35:56.482: INFO: Created: latency-svc-4kjvd Jul 20 02:35:56.526: INFO: Got endpoints: latency-svc-4kjvd [117.634271ms] Jul 20 02:35:56.534: INFO: Created: latency-svc-8jnbc Jul 20 02:35:56.552: INFO: Got endpoints: latency-svc-8jnbc [143.257198ms] Jul 20 02:35:56.607: INFO: Created: latency-svc-pbnms Jul 20 02:35:56.661: INFO: Got endpoints: latency-svc-pbnms [252.178398ms] Jul 20 02:35:56.715: INFO: Created: latency-svc-wxggs Jul 20 02:35:56.739: INFO: Got endpoints: latency-svc-wxggs [329.477743ms] Jul 20 02:35:56.808: INFO: Created: latency-svc-p4lr8 Jul 20 02:35:56.822: INFO: Got endpoints: latency-svc-p4lr8 [412.590873ms] Jul 20 02:35:56.846: INFO: Created: latency-svc-g82qr Jul 20 02:35:56.858: INFO: Got endpoints: latency-svc-g82qr [448.691834ms] Jul 20 02:35:56.889: INFO: Created: latency-svc-k248k Jul 20 02:35:56.940: INFO: Got endpoints: latency-svc-k248k [531.279977ms] Jul 20 02:35:56.966: INFO: Created: latency-svc-clfhp Jul 20 02:35:56.982: INFO: Got endpoints: latency-svc-clfhp [572.244677ms] Jul 20 02:35:57.002: INFO: Created: latency-svc-qpvtn Jul 20 02:35:57.015: INFO: Got endpoints: latency-svc-qpvtn [605.300113ms] Jul 20 02:35:57.038: INFO: Created: latency-svc-xbx79 Jul 20 02:35:57.078: INFO: Got endpoints: latency-svc-xbx79 [668.596428ms] Jul 20 02:35:57.092: INFO: Created: latency-svc-tc6s7 Jul 20 02:35:57.108: INFO: Got endpoints: latency-svc-tc6s7 [698.315023ms] Jul 20 02:35:57.134: INFO: Created: latency-svc-4qjmq Jul 20 02:35:57.150: INFO: Got endpoints: latency-svc-4qjmq [740.50299ms] Jul 20 02:35:57.176: INFO: Created: latency-svc-5k6dk Jul 20 02:35:57.246: INFO: Got endpoints: latency-svc-5k6dk [837.014592ms] Jul 20 02:35:57.253: INFO: Created: latency-svc-5ftgj Jul 20 02:35:57.258: INFO: Got endpoints: latency-svc-5ftgj [849.235522ms] Jul 20 02:35:57.278: INFO: Created: latency-svc-m22zl Jul 20 02:35:57.290: INFO: Got endpoints: latency-svc-m22zl [880.302999ms] Jul 20 02:35:57.344: INFO: Created: latency-svc-j488q Jul 20 02:35:57.401: INFO: Got endpoints: latency-svc-j488q [874.907578ms] Jul 20 02:35:57.434: INFO: Created: latency-svc-8r8sm Jul 20 02:35:57.464: INFO: Got endpoints: latency-svc-8r8sm [911.774071ms] Jul 20 02:35:57.494: INFO: Created: latency-svc-rw652 Jul 20 02:35:57.534: INFO: Got endpoints: latency-svc-rw652 [872.984168ms] Jul 20 02:35:57.560: INFO: Created: latency-svc-g9wnj Jul 20 02:35:57.572: INFO: Got endpoints: latency-svc-g9wnj [833.456818ms] Jul 20 02:35:57.596: INFO: Created: latency-svc-rfhnr Jul 20 02:35:57.605: INFO: Got endpoints: latency-svc-rfhnr [71.515424ms] Jul 20 02:35:57.626: INFO: Created: latency-svc-6s9vt Jul 20 02:35:57.689: INFO: Got endpoints: latency-svc-6s9vt [866.836739ms] Jul 20 02:35:57.722: INFO: Created: latency-svc-7rzqr Jul 20 02:35:57.738: INFO: Got endpoints: latency-svc-7rzqr [879.822704ms] Jul 20 02:35:57.783: INFO: Created: latency-svc-xrsqm Jul 20 02:35:57.826: INFO: Got endpoints: latency-svc-xrsqm [885.781592ms] Jul 20 02:35:57.860: INFO: Created: latency-svc-c28zx Jul 20 02:35:57.872: INFO: Got endpoints: latency-svc-c28zx [890.738925ms] Jul 20 02:35:57.890: INFO: Created: latency-svc-7hjq8 Jul 20 02:35:57.989: INFO: Got endpoints: latency-svc-7hjq8 [973.996047ms] Jul 20 02:35:58.017: INFO: Created: latency-svc-6kpgk Jul 20 02:35:58.051: INFO: Got endpoints: 
latency-svc-6kpgk [973.040006ms] Jul 20 02:35:58.077: INFO: Created: latency-svc-mqb5d Jul 20 02:35:58.120: INFO: Got endpoints: latency-svc-mqb5d [1.011713729s] Jul 20 02:35:58.148: INFO: Created: latency-svc-lpcpl Jul 20 02:35:58.165: INFO: Got endpoints: latency-svc-lpcpl [1.015206964s] Jul 20 02:35:58.202: INFO: Created: latency-svc-fnwzb Jul 20 02:35:58.269: INFO: Got endpoints: latency-svc-fnwzb [1.023370163s] Jul 20 02:35:58.281: INFO: Created: latency-svc-fhzhm Jul 20 02:35:58.297: INFO: Got endpoints: latency-svc-fhzhm [1.038998262s] Jul 20 02:35:58.425: INFO: Created: latency-svc-znm2z Jul 20 02:35:58.435: INFO: Got endpoints: latency-svc-znm2z [1.145433896s] Jul 20 02:35:58.460: INFO: Created: latency-svc-ln2lw Jul 20 02:35:58.472: INFO: Got endpoints: latency-svc-ln2lw [1.070606374s] Jul 20 02:35:58.496: INFO: Created: latency-svc-8nkn5 Jul 20 02:35:58.514: INFO: Got endpoints: latency-svc-8nkn5 [1.050658944s] Jul 20 02:35:58.619: INFO: Created: latency-svc-p8tt2 Jul 20 02:35:58.629: INFO: Got endpoints: latency-svc-p8tt2 [1.056342819s] Jul 20 02:35:58.651: INFO: Created: latency-svc-46rrb Jul 20 02:35:58.676: INFO: Got endpoints: latency-svc-46rrb [1.070134824s] Jul 20 02:35:58.755: INFO: Created: latency-svc-hqxvs Jul 20 02:35:58.761: INFO: Got endpoints: latency-svc-hqxvs [1.072258646s] Jul 20 02:35:58.809: INFO: Created: latency-svc-z6tmh Jul 20 02:35:58.821: INFO: Got endpoints: latency-svc-z6tmh [1.083418128s] Jul 20 02:35:58.905: INFO: Created: latency-svc-j7z8q Jul 20 02:35:58.937: INFO: Got endpoints: latency-svc-j7z8q [1.111084757s] Jul 20 02:35:58.988: INFO: Created: latency-svc-jrs4p Jul 20 02:35:59.001: INFO: Got endpoints: latency-svc-jrs4p [1.128775532s] Jul 20 02:35:59.055: INFO: Created: latency-svc-8ln45 Jul 20 02:35:59.066: INFO: Got endpoints: latency-svc-8ln45 [1.076937593s] Jul 20 02:35:59.097: INFO: Created: latency-svc-kggdr Jul 20 02:35:59.110: INFO: Got endpoints: latency-svc-kggdr [1.059227449s] Jul 20 02:35:59.137: INFO: Created: latency-svc-zt5cq Jul 20 02:35:59.152: INFO: Got endpoints: latency-svc-zt5cq [1.032367684s] Jul 20 02:35:59.227: INFO: Created: latency-svc-ckcq8 Jul 20 02:35:59.252: INFO: Got endpoints: latency-svc-ckcq8 [1.086916501s] Jul 20 02:35:59.306: INFO: Created: latency-svc-j9ljp Jul 20 02:35:59.321: INFO: Got endpoints: latency-svc-j9ljp [1.051647559s] Jul 20 02:35:59.383: INFO: Created: latency-svc-tfxd2 Jul 20 02:35:59.400: INFO: Got endpoints: latency-svc-tfxd2 [1.102315428s] Jul 20 02:35:59.426: INFO: Created: latency-svc-fwssw Jul 20 02:35:59.448: INFO: Got endpoints: latency-svc-fwssw [1.012386743s] Jul 20 02:35:59.474: INFO: Created: latency-svc-4g4gx Jul 20 02:35:59.557: INFO: Got endpoints: latency-svc-4g4gx [1.085397493s] Jul 20 02:35:59.560: INFO: Created: latency-svc-9h4z2 Jul 20 02:35:59.581: INFO: Got endpoints: latency-svc-9h4z2 [1.066578671s] Jul 20 02:35:59.616: INFO: Created: latency-svc-k764x Jul 20 02:35:59.626: INFO: Got endpoints: latency-svc-k764x [996.831607ms] Jul 20 02:35:59.708: INFO: Created: latency-svc-bdvkc Jul 20 02:35:59.737: INFO: Got endpoints: latency-svc-bdvkc [1.06175516s] Jul 20 02:35:59.738: INFO: Created: latency-svc-nfkjv Jul 20 02:35:59.752: INFO: Got endpoints: latency-svc-nfkjv [990.769609ms] Jul 20 02:35:59.803: INFO: Created: latency-svc-wbpm9 Jul 20 02:35:59.876: INFO: Got endpoints: latency-svc-wbpm9 [1.054389309s] Jul 20 02:35:59.877: INFO: Created: latency-svc-dklw5 Jul 20 02:35:59.885: INFO: Got endpoints: latency-svc-dklw5 [947.208384ms] Jul 20 02:35:59.906: INFO: Created: 
latency-svc-sw5wm Jul 20 02:35:59.921: INFO: Got endpoints: latency-svc-sw5wm [920.067564ms] Jul 20 02:35:59.948: INFO: Created: latency-svc-2hnnk Jul 20 02:35:59.963: INFO: Got endpoints: latency-svc-2hnnk [897.25688ms] Jul 20 02:36:00.014: INFO: Created: latency-svc-lhk96 Jul 20 02:36:00.036: INFO: Got endpoints: latency-svc-lhk96 [925.926295ms] Jul 20 02:36:00.055: INFO: Created: latency-svc-6zcfq Jul 20 02:36:00.072: INFO: Got endpoints: latency-svc-6zcfq [919.801836ms] Jul 20 02:36:00.156: INFO: Created: latency-svc-qjbkj Jul 20 02:36:00.160: INFO: Got endpoints: latency-svc-qjbkj [907.707406ms] Jul 20 02:36:00.181: INFO: Created: latency-svc-bm8lb Jul 20 02:36:00.200: INFO: Got endpoints: latency-svc-bm8lb [878.657558ms] Jul 20 02:36:00.230: INFO: Created: latency-svc-rpnl9 Jul 20 02:36:00.241: INFO: Got endpoints: latency-svc-rpnl9 [841.094392ms] Jul 20 02:36:00.306: INFO: Created: latency-svc-mzktj Jul 20 02:36:00.350: INFO: Got endpoints: latency-svc-mzktj [902.233856ms] Jul 20 02:36:00.350: INFO: Created: latency-svc-hscl4 Jul 20 02:36:00.362: INFO: Got endpoints: latency-svc-hscl4 [804.501202ms] Jul 20 02:36:00.379: INFO: Created: latency-svc-mqhhd Jul 20 02:36:00.391: INFO: Got endpoints: latency-svc-mqhhd [810.192578ms] Jul 20 02:36:00.491: INFO: Created: latency-svc-f4prv Jul 20 02:36:00.496: INFO: Got endpoints: latency-svc-f4prv [870.753742ms] Jul 20 02:36:00.535: INFO: Created: latency-svc-7l9d4 Jul 20 02:36:00.549: INFO: Got endpoints: latency-svc-7l9d4 [810.97635ms] Jul 20 02:36:00.578: INFO: Created: latency-svc-68zd2 Jul 20 02:36:00.652: INFO: Got endpoints: latency-svc-68zd2 [900.54957ms] Jul 20 02:36:00.675: INFO: Created: latency-svc-q86rr Jul 20 02:36:00.681: INFO: Got endpoints: latency-svc-q86rr [805.099722ms] Jul 20 02:36:00.727: INFO: Created: latency-svc-7htv8 Jul 20 02:36:00.751: INFO: Got endpoints: latency-svc-7htv8 [866.586396ms] Jul 20 02:36:00.820: INFO: Created: latency-svc-j5s6w Jul 20 02:36:00.860: INFO: Got endpoints: latency-svc-j5s6w [938.501576ms] Jul 20 02:36:00.860: INFO: Created: latency-svc-kpldj Jul 20 02:36:00.909: INFO: Got endpoints: latency-svc-kpldj [945.83011ms] Jul 20 02:36:00.970: INFO: Created: latency-svc-bcgxg Jul 20 02:36:00.974: INFO: Got endpoints: latency-svc-bcgxg [937.380707ms] Jul 20 02:36:01.040: INFO: Created: latency-svc-4hk6x Jul 20 02:36:01.055: INFO: Got endpoints: latency-svc-4hk6x [982.612594ms] Jul 20 02:36:01.132: INFO: Created: latency-svc-kq447 Jul 20 02:36:01.147: INFO: Got endpoints: latency-svc-kq447 [986.836083ms] Jul 20 02:36:01.171: INFO: Created: latency-svc-dgwtx Jul 20 02:36:01.189: INFO: Got endpoints: latency-svc-dgwtx [989.048268ms] Jul 20 02:36:01.214: INFO: Created: latency-svc-ggv9z Jul 20 02:36:01.318: INFO: Got endpoints: latency-svc-ggv9z [1.076799223s] Jul 20 02:36:01.322: INFO: Created: latency-svc-56fp8 Jul 20 02:36:01.334: INFO: Got endpoints: latency-svc-56fp8 [984.10279ms] Jul 20 02:36:01.370: INFO: Created: latency-svc-bbbg9 Jul 20 02:36:01.381: INFO: Got endpoints: latency-svc-bbbg9 [1.018913821s] Jul 20 02:36:01.468: INFO: Created: latency-svc-xv9mx Jul 20 02:36:01.472: INFO: Got endpoints: latency-svc-xv9mx [1.080848129s] Jul 20 02:36:01.507: INFO: Created: latency-svc-bw787 Jul 20 02:36:01.520: INFO: Got endpoints: latency-svc-bw787 [1.023074743s] Jul 20 02:36:01.537: INFO: Created: latency-svc-drb74 Jul 20 02:36:01.550: INFO: Got endpoints: latency-svc-drb74 [1.000984904s] Jul 20 02:36:01.635: INFO: Created: latency-svc-tsdhc Jul 20 02:36:01.664: INFO: Created: latency-svc-k7r24 Jul 
20 02:36:01.664: INFO: Got endpoints: latency-svc-tsdhc [1.011485759s] Jul 20 02:36:01.688: INFO: Got endpoints: latency-svc-k7r24 [1.006953204s] Jul 20 02:36:01.710: INFO: Created: latency-svc-rhdkq Jul 20 02:36:01.724: INFO: Got endpoints: latency-svc-rhdkq [973.134464ms] Jul 20 02:36:01.797: INFO: Created: latency-svc-xtcgd Jul 20 02:36:01.801: INFO: Got endpoints: latency-svc-xtcgd [941.075063ms] Jul 20 02:36:01.844: INFO: Created: latency-svc-fjt5x Jul 20 02:36:01.857: INFO: Got endpoints: latency-svc-fjt5x [948.344211ms] Jul 20 02:36:01.879: INFO: Created: latency-svc-k6wjh Jul 20 02:36:01.894: INFO: Got endpoints: latency-svc-k6wjh [919.922594ms] Jul 20 02:36:01.976: INFO: Created: latency-svc-fbs2w Jul 20 02:36:01.981: INFO: Got endpoints: latency-svc-fbs2w [925.992609ms] Jul 20 02:36:02.047: INFO: Created: latency-svc-57bnj Jul 20 02:36:02.068: INFO: Got endpoints: latency-svc-57bnj [921.483472ms] Jul 20 02:36:02.150: INFO: Created: latency-svc-hn9rr Jul 20 02:36:02.164: INFO: Got endpoints: latency-svc-hn9rr [975.415472ms] Jul 20 02:36:02.185: INFO: Created: latency-svc-7z7sw Jul 20 02:36:02.201: INFO: Got endpoints: latency-svc-7z7sw [883.084721ms] Jul 20 02:36:02.220: INFO: Created: latency-svc-x6tjb Jul 20 02:36:02.237: INFO: Got endpoints: latency-svc-x6tjb [902.63303ms] Jul 20 02:36:02.306: INFO: Created: latency-svc-x7kqb Jul 20 02:36:02.317: INFO: Got endpoints: latency-svc-x7kqb [936.290064ms] Jul 20 02:36:02.357: INFO: Created: latency-svc-zzkpk Jul 20 02:36:02.369: INFO: Got endpoints: latency-svc-zzkpk [896.933272ms] Jul 20 02:36:02.389: INFO: Created: latency-svc-x9vjl Jul 20 02:36:02.468: INFO: Got endpoints: latency-svc-x9vjl [948.014226ms] Jul 20 02:36:02.470: INFO: Created: latency-svc-bjqmq Jul 20 02:36:02.483: INFO: Got endpoints: latency-svc-bjqmq [933.710784ms] Jul 20 02:36:02.504: INFO: Created: latency-svc-f98qt Jul 20 02:36:02.520: INFO: Got endpoints: latency-svc-f98qt [855.804183ms] Jul 20 02:36:02.545: INFO: Created: latency-svc-9x6j7 Jul 20 02:36:02.640: INFO: Got endpoints: latency-svc-9x6j7 [952.646632ms] Jul 20 02:36:02.653: INFO: Created: latency-svc-qpjz9 Jul 20 02:36:02.669: INFO: Got endpoints: latency-svc-qpjz9 [944.324383ms] Jul 20 02:36:02.725: INFO: Created: latency-svc-6gb5z Jul 20 02:36:02.826: INFO: Got endpoints: latency-svc-6gb5z [1.02504079s] Jul 20 02:36:02.829: INFO: Created: latency-svc-29852 Jul 20 02:36:02.837: INFO: Got endpoints: latency-svc-29852 [979.803571ms] Jul 20 02:36:02.869: INFO: Created: latency-svc-lkccz Jul 20 02:36:02.904: INFO: Got endpoints: latency-svc-lkccz [1.010305278s] Jul 20 02:36:03.006: INFO: Created: latency-svc-vrvdm Jul 20 02:36:03.018: INFO: Got endpoints: latency-svc-vrvdm [1.037101038s] Jul 20 02:36:03.037: INFO: Created: latency-svc-pptl4 Jul 20 02:36:03.048: INFO: Got endpoints: latency-svc-pptl4 [979.625867ms] Jul 20 02:36:03.067: INFO: Created: latency-svc-chc4c Jul 20 02:36:03.079: INFO: Got endpoints: latency-svc-chc4c [914.193651ms] Jul 20 02:36:03.097: INFO: Created: latency-svc-dgmx8 Jul 20 02:36:03.137: INFO: Got endpoints: latency-svc-dgmx8 [936.251264ms] Jul 20 02:36:03.144: INFO: Created: latency-svc-fxrsj Jul 20 02:36:03.157: INFO: Got endpoints: latency-svc-fxrsj [920.01425ms] Jul 20 02:36:03.208: INFO: Created: latency-svc-hgcr6 Jul 20 02:36:03.230: INFO: Got endpoints: latency-svc-hgcr6 [912.189025ms] Jul 20 02:36:03.282: INFO: Created: latency-svc-vzvhh Jul 20 02:36:03.300: INFO: Got endpoints: latency-svc-vzvhh [931.064903ms] Jul 20 02:36:03.337: INFO: Created: latency-svc-vsm6p Jul 
20 02:36:03.379: INFO: Got endpoints: latency-svc-vsm6p [911.765027ms] Jul 20 02:36:03.473: INFO: Created: latency-svc-cw8qz Jul 20 02:36:03.483: INFO: Got endpoints: latency-svc-cw8qz [999.88347ms] Jul 20 02:36:03.505: INFO: Created: latency-svc-x6dvn Jul 20 02:36:03.519: INFO: Got endpoints: latency-svc-x6dvn [999.266812ms] Jul 20 02:36:03.541: INFO: Created: latency-svc-n8x4k Jul 20 02:36:03.611: INFO: Got endpoints: latency-svc-n8x4k [970.409588ms] Jul 20 02:36:03.626: INFO: Created: latency-svc-hvw85 Jul 20 02:36:03.639: INFO: Got endpoints: latency-svc-hvw85 [970.300034ms] Jul 20 02:36:03.668: INFO: Created: latency-svc-m8dxm Jul 20 02:36:03.682: INFO: Got endpoints: latency-svc-m8dxm [855.310648ms] Jul 20 02:36:03.709: INFO: Created: latency-svc-7c929 Jul 20 02:36:03.778: INFO: Got endpoints: latency-svc-7c929 [941.277897ms] Jul 20 02:36:03.782: INFO: Created: latency-svc-ndwqv Jul 20 02:36:03.790: INFO: Got endpoints: latency-svc-ndwqv [886.387349ms] Jul 20 02:36:03.811: INFO: Created: latency-svc-5lq24 Jul 20 02:36:03.836: INFO: Got endpoints: latency-svc-5lq24 [817.575604ms] Jul 20 02:36:03.866: INFO: Created: latency-svc-gpxrn Jul 20 02:36:03.947: INFO: Got endpoints: latency-svc-gpxrn [898.549716ms] Jul 20 02:36:03.967: INFO: Created: latency-svc-hhnbt Jul 20 02:36:03.983: INFO: Got endpoints: latency-svc-hhnbt [904.255564ms] Jul 20 02:36:04.003: INFO: Created: latency-svc-tphfk Jul 20 02:36:04.039: INFO: Got endpoints: latency-svc-tphfk [902.077499ms] Jul 20 02:36:04.121: INFO: Created: latency-svc-vcrg6 Jul 20 02:36:04.147: INFO: Got endpoints: latency-svc-vcrg6 [990.198109ms] Jul 20 02:36:04.195: INFO: Created: latency-svc-5h7l4 Jul 20 02:36:04.207: INFO: Got endpoints: latency-svc-5h7l4 [977.087678ms] Jul 20 02:36:04.263: INFO: Created: latency-svc-x54cj Jul 20 02:36:04.268: INFO: Got endpoints: latency-svc-x54cj [967.226313ms] Jul 20 02:36:04.321: INFO: Created: latency-svc-857k9 Jul 20 02:36:04.333: INFO: Got endpoints: latency-svc-857k9 [953.362623ms] Jul 20 02:36:04.351: INFO: Created: latency-svc-2v8jh Jul 20 02:36:04.419: INFO: Got endpoints: latency-svc-2v8jh [935.824415ms] Jul 20 02:36:04.427: INFO: Created: latency-svc-9dsbc Jul 20 02:36:04.454: INFO: Got endpoints: latency-svc-9dsbc [935.026816ms] Jul 20 02:36:04.477: INFO: Created: latency-svc-qwvkh Jul 20 02:36:04.489: INFO: Got endpoints: latency-svc-qwvkh [878.512772ms] Jul 20 02:36:04.512: INFO: Created: latency-svc-wxsmw Jul 20 02:36:04.587: INFO: Got endpoints: latency-svc-wxsmw [947.817588ms] Jul 20 02:36:04.603: INFO: Created: latency-svc-r9vl6 Jul 20 02:36:04.616: INFO: Got endpoints: latency-svc-r9vl6 [934.571663ms] Jul 20 02:36:04.645: INFO: Created: latency-svc-c49z7 Jul 20 02:36:04.670: INFO: Got endpoints: latency-svc-c49z7 [891.962228ms] Jul 20 02:36:04.743: INFO: Created: latency-svc-rjhp9 Jul 20 02:36:04.749: INFO: Got endpoints: latency-svc-rjhp9 [958.349846ms] Jul 20 02:36:04.953: INFO: Created: latency-svc-5dr7t Jul 20 02:36:04.965: INFO: Got endpoints: latency-svc-5dr7t [1.129447087s] Jul 20 02:36:04.993: INFO: Created: latency-svc-kv7jv Jul 20 02:36:05.011: INFO: Got endpoints: latency-svc-kv7jv [1.064372398s] Jul 20 02:36:05.041: INFO: Created: latency-svc-ds5cb Jul 20 02:36:05.083: INFO: Got endpoints: latency-svc-ds5cb [1.100353124s] Jul 20 02:36:05.102: INFO: Created: latency-svc-hsrxs Jul 20 02:36:05.116: INFO: Got endpoints: latency-svc-hsrxs [1.077184008s] Jul 20 02:36:05.137: INFO: Created: latency-svc-nz6mf Jul 20 02:36:05.146: INFO: Got endpoints: latency-svc-nz6mf [998.863391ms] 
Jul 20 02:36:05.166: INFO: Created: latency-svc-xxvzg Jul 20 02:36:05.176: INFO: Got endpoints: latency-svc-xxvzg [969.239113ms] Jul 20 02:36:05.241: INFO: Created: latency-svc-9h86b Jul 20 02:36:05.269: INFO: Got endpoints: latency-svc-9h86b [1.00177875s] Jul 20 02:36:05.335: INFO: Created: latency-svc-7ssxw Jul 20 02:36:05.421: INFO: Got endpoints: latency-svc-7ssxw [1.088257148s] Jul 20 02:36:05.422: INFO: Created: latency-svc-rh6kl Jul 20 02:36:05.441: INFO: Got endpoints: latency-svc-rh6kl [1.022186847s] Jul 20 02:36:05.491: INFO: Created: latency-svc-9jkts Jul 20 02:36:05.593: INFO: Got endpoints: latency-svc-9jkts [1.138921349s] Jul 20 02:36:05.598: INFO: Created: latency-svc-rj4t5 Jul 20 02:36:05.603: INFO: Got endpoints: latency-svc-rj4t5 [1.113496673s] Jul 20 02:36:05.625: INFO: Created: latency-svc-9wx5q Jul 20 02:36:05.639: INFO: Got endpoints: latency-svc-9wx5q [1.052270598s] Jul 20 02:36:05.665: INFO: Created: latency-svc-gggbd Jul 20 02:36:05.682: INFO: Got endpoints: latency-svc-gggbd [1.065589592s] Jul 20 02:36:05.754: INFO: Created: latency-svc-cjkvz Jul 20 02:36:05.773: INFO: Got endpoints: latency-svc-cjkvz [1.102208548s] Jul 20 02:36:05.809: INFO: Created: latency-svc-r62f8 Jul 20 02:36:05.814: INFO: Got endpoints: latency-svc-r62f8 [1.06508691s] Jul 20 02:36:05.832: INFO: Created: latency-svc-xrgtg Jul 20 02:36:05.845: INFO: Got endpoints: latency-svc-xrgtg [879.66288ms] Jul 20 02:36:05.886: INFO: Created: latency-svc-9htlv Jul 20 02:36:05.898: INFO: Got endpoints: latency-svc-9htlv [887.171419ms] Jul 20 02:36:05.929: INFO: Created: latency-svc-wdkz6 Jul 20 02:36:05.947: INFO: Got endpoints: latency-svc-wdkz6 [863.515163ms] Jul 20 02:36:05.964: INFO: Created: latency-svc-429vx Jul 20 02:36:05.977: INFO: Got endpoints: latency-svc-429vx [860.67108ms] Jul 20 02:36:06.072: INFO: Created: latency-svc-479vr Jul 20 02:36:06.077: INFO: Got endpoints: latency-svc-479vr [930.70084ms] Jul 20 02:36:06.102: INFO: Created: latency-svc-sc6q4 Jul 20 02:36:06.115: INFO: Got endpoints: latency-svc-sc6q4 [939.35463ms] Jul 20 02:36:06.138: INFO: Created: latency-svc-vz286 Jul 20 02:36:06.152: INFO: Got endpoints: latency-svc-vz286 [882.272317ms] Jul 20 02:36:06.210: INFO: Created: latency-svc-9dskl Jul 20 02:36:06.242: INFO: Got endpoints: latency-svc-9dskl [820.273791ms] Jul 20 02:36:06.242: INFO: Created: latency-svc-pgvn8 Jul 20 02:36:06.270: INFO: Got endpoints: latency-svc-pgvn8 [828.645181ms] Jul 20 02:36:06.300: INFO: Created: latency-svc-x5hrl Jul 20 02:36:06.359: INFO: Got endpoints: latency-svc-x5hrl [766.045483ms] Jul 20 02:36:06.384: INFO: Created: latency-svc-w5jjz Jul 20 02:36:06.399: INFO: Got endpoints: latency-svc-w5jjz [795.906155ms] Jul 20 02:36:06.421: INFO: Created: latency-svc-nbgvg Jul 20 02:36:06.430: INFO: Got endpoints: latency-svc-nbgvg [790.587202ms] Jul 20 02:36:06.533: INFO: Created: latency-svc-z6hsl Jul 20 02:36:06.552: INFO: Got endpoints: latency-svc-z6hsl [869.927195ms] Jul 20 02:36:06.583: INFO: Created: latency-svc-h5pm8 Jul 20 02:36:06.598: INFO: Got endpoints: latency-svc-h5pm8 [825.087851ms] Jul 20 02:36:06.618: INFO: Created: latency-svc-hwr9r Jul 20 02:36:06.702: INFO: Got endpoints: latency-svc-hwr9r [888.190942ms] Jul 20 02:36:06.720: INFO: Created: latency-svc-9rrpd Jul 20 02:36:06.743: INFO: Got endpoints: latency-svc-9rrpd [898.192795ms] Jul 20 02:36:06.845: INFO: Created: latency-svc-sh464 Jul 20 02:36:06.857: INFO: Got endpoints: latency-svc-sh464 [958.650159ms] Jul 20 02:36:06.889: INFO: Created: latency-svc-nr859 Jul 20 02:36:06.905: 
INFO: Got endpoints: latency-svc-nr859 [958.487164ms] Jul 20 02:36:06.931: INFO: Created: latency-svc-66hgj Jul 20 02:36:07.018: INFO: Got endpoints: latency-svc-66hgj [1.041004713s] Jul 20 02:36:07.021: INFO: Created: latency-svc-vzs88 Jul 20 02:36:07.026: INFO: Got endpoints: latency-svc-vzs88 [948.902523ms] Jul 20 02:36:07.044: INFO: Created: latency-svc-mxzm2 Jul 20 02:36:07.068: INFO: Got endpoints: latency-svc-mxzm2 [952.669679ms] Jul 20 02:36:07.093: INFO: Created: latency-svc-ckcnx Jul 20 02:36:07.105: INFO: Got endpoints: latency-svc-ckcnx [952.95842ms] Jul 20 02:36:07.162: INFO: Created: latency-svc-l8b7j Jul 20 02:36:07.166: INFO: Got endpoints: latency-svc-l8b7j [924.2413ms] Jul 20 02:36:07.188: INFO: Created: latency-svc-v2vhk Jul 20 02:36:07.201: INFO: Got endpoints: latency-svc-v2vhk [930.984681ms] Jul 20 02:36:07.218: INFO: Created: latency-svc-jmd24 Jul 20 02:36:07.231: INFO: Got endpoints: latency-svc-jmd24 [871.845269ms] Jul 20 02:36:07.248: INFO: Created: latency-svc-zjnqg Jul 20 02:36:07.335: INFO: Got endpoints: latency-svc-zjnqg [936.109541ms] Jul 20 02:36:07.337: INFO: Created: latency-svc-99x4d Jul 20 02:36:07.369: INFO: Got endpoints: latency-svc-99x4d [938.92967ms] Jul 20 02:36:07.417: INFO: Created: latency-svc-242qc Jul 20 02:36:07.509: INFO: Got endpoints: latency-svc-242qc [957.128665ms] Jul 20 02:36:07.511: INFO: Created: latency-svc-dt92z Jul 20 02:36:07.519: INFO: Got endpoints: latency-svc-dt92z [921.365275ms] Jul 20 02:36:07.548: INFO: Created: latency-svc-9676s Jul 20 02:36:07.562: INFO: Got endpoints: latency-svc-9676s [859.531345ms] Jul 20 02:36:07.584: INFO: Created: latency-svc-4g9kk Jul 20 02:36:07.598: INFO: Got endpoints: latency-svc-4g9kk [855.196733ms] Jul 20 02:36:07.653: INFO: Created: latency-svc-cm494 Jul 20 02:36:07.668: INFO: Got endpoints: latency-svc-cm494 [811.274981ms] Jul 20 02:36:07.704: INFO: Created: latency-svc-xd8xz Jul 20 02:36:07.719: INFO: Got endpoints: latency-svc-xd8xz [813.328474ms] Jul 20 02:36:07.791: INFO: Created: latency-svc-p664p Jul 20 02:36:07.803: INFO: Got endpoints: latency-svc-p664p [784.726383ms] Jul 20 02:36:07.836: INFO: Created: latency-svc-txxdt Jul 20 02:36:07.852: INFO: Got endpoints: latency-svc-txxdt [826.296544ms] Jul 20 02:36:07.940: INFO: Created: latency-svc-59z9w Jul 20 02:36:07.953: INFO: Got endpoints: latency-svc-59z9w [884.771786ms] Jul 20 02:36:07.974: INFO: Created: latency-svc-d7dps Jul 20 02:36:08.016: INFO: Got endpoints: latency-svc-d7dps [911.302365ms] Jul 20 02:36:08.084: INFO: Created: latency-svc-9fwhh Jul 20 02:36:08.088: INFO: Got endpoints: latency-svc-9fwhh [921.68758ms] Jul 20 02:36:08.118: INFO: Created: latency-svc-zsgfr Jul 20 02:36:08.134: INFO: Got endpoints: latency-svc-zsgfr [933.249756ms] Jul 20 02:36:08.154: INFO: Created: latency-svc-99hb4 Jul 20 02:36:08.171: INFO: Got endpoints: latency-svc-99hb4 [939.30645ms] Jul 20 02:36:08.246: INFO: Created: latency-svc-lxvc7 Jul 20 02:36:08.280: INFO: Got endpoints: latency-svc-lxvc7 [944.352626ms] Jul 20 02:36:08.280: INFO: Created: latency-svc-8xhb9 Jul 20 02:36:08.322: INFO: Got endpoints: latency-svc-8xhb9 [952.708761ms] Jul 20 02:36:08.397: INFO: Created: latency-svc-wl9zs Jul 20 02:36:08.418: INFO: Got endpoints: latency-svc-wl9zs [908.936726ms] Jul 20 02:36:08.418: INFO: Created: latency-svc-797qg Jul 20 02:36:08.454: INFO: Got endpoints: latency-svc-797qg [934.25444ms] Jul 20 02:36:08.545: INFO: Created: latency-svc-n4zft Jul 20 02:36:08.556: INFO: Got endpoints: latency-svc-n4zft [993.935736ms] Jul 20 02:36:08.604: 
INFO: Created: latency-svc-w8rns Jul 20 02:36:08.629: INFO: Got endpoints: latency-svc-w8rns [1.030298856s] Jul 20 02:36:08.689: INFO: Created: latency-svc-rm75f Jul 20 02:36:08.693: INFO: Got endpoints: latency-svc-rm75f [1.024340676s] Jul 20 02:36:08.754: INFO: Created: latency-svc-lkvd9 Jul 20 02:36:08.767: INFO: Got endpoints: latency-svc-lkvd9 [1.047735103s] Jul 20 02:36:08.832: INFO: Created: latency-svc-z88w2 Jul 20 02:36:08.845: INFO: Got endpoints: latency-svc-z88w2 [1.042294625s] Jul 20 02:36:08.874: INFO: Created: latency-svc-72qrt Jul 20 02:36:08.910: INFO: Got endpoints: latency-svc-72qrt [1.058076815s] Jul 20 02:36:08.971: INFO: Created: latency-svc-sp56m Jul 20 02:36:09.017: INFO: Got endpoints: latency-svc-sp56m [1.064194392s] Jul 20 02:36:09.019: INFO: Created: latency-svc-mk6xd Jul 20 02:36:09.047: INFO: Got endpoints: latency-svc-mk6xd [1.030909601s] Jul 20 02:36:09.121: INFO: Created: latency-svc-68vtf Jul 20 02:36:09.155: INFO: Got endpoints: latency-svc-68vtf [1.067796541s] Jul 20 02:36:09.187: INFO: Created: latency-svc-f7v6t Jul 20 02:36:09.200: INFO: Got endpoints: latency-svc-f7v6t [1.065693619s] Jul 20 02:36:09.200: INFO: Latencies: [71.515424ms 117.634271ms 143.257198ms 252.178398ms 329.477743ms 412.590873ms 448.691834ms 531.279977ms 572.244677ms 605.300113ms 668.596428ms 698.315023ms 740.50299ms 766.045483ms 784.726383ms 790.587202ms 795.906155ms 804.501202ms 805.099722ms 810.192578ms 810.97635ms 811.274981ms 813.328474ms 817.575604ms 820.273791ms 825.087851ms 826.296544ms 828.645181ms 833.456818ms 837.014592ms 841.094392ms 849.235522ms 855.196733ms 855.310648ms 855.804183ms 859.531345ms 860.67108ms 863.515163ms 866.586396ms 866.836739ms 869.927195ms 870.753742ms 871.845269ms 872.984168ms 874.907578ms 878.512772ms 878.657558ms 879.66288ms 879.822704ms 880.302999ms 882.272317ms 883.084721ms 884.771786ms 885.781592ms 886.387349ms 887.171419ms 888.190942ms 890.738925ms 891.962228ms 896.933272ms 897.25688ms 898.192795ms 898.549716ms 900.54957ms 902.077499ms 902.233856ms 902.63303ms 904.255564ms 907.707406ms 908.936726ms 911.302365ms 911.765027ms 911.774071ms 912.189025ms 914.193651ms 919.801836ms 919.922594ms 920.01425ms 920.067564ms 921.365275ms 921.483472ms 921.68758ms 924.2413ms 925.926295ms 925.992609ms 930.70084ms 930.984681ms 931.064903ms 933.249756ms 933.710784ms 934.25444ms 934.571663ms 935.026816ms 935.824415ms 936.109541ms 936.251264ms 936.290064ms 937.380707ms 938.501576ms 938.92967ms 939.30645ms 939.35463ms 941.075063ms 941.277897ms 944.324383ms 944.352626ms 945.83011ms 947.208384ms 947.817588ms 948.014226ms 948.344211ms 948.902523ms 952.646632ms 952.669679ms 952.708761ms 952.95842ms 953.362623ms 957.128665ms 958.349846ms 958.487164ms 958.650159ms 967.226313ms 969.239113ms 970.300034ms 970.409588ms 973.040006ms 973.134464ms 973.996047ms 975.415472ms 977.087678ms 979.625867ms 979.803571ms 982.612594ms 984.10279ms 986.836083ms 989.048268ms 990.198109ms 990.769609ms 993.935736ms 996.831607ms 998.863391ms 999.266812ms 999.88347ms 1.000984904s 1.00177875s 1.006953204s 1.010305278s 1.011485759s 1.011713729s 1.012386743s 1.015206964s 1.018913821s 1.022186847s 1.023074743s 1.023370163s 1.024340676s 1.02504079s 1.030298856s 1.030909601s 1.032367684s 1.037101038s 1.038998262s 1.041004713s 1.042294625s 1.047735103s 1.050658944s 1.051647559s 1.052270598s 1.054389309s 1.056342819s 1.058076815s 1.059227449s 1.06175516s 1.064194392s 1.064372398s 1.06508691s 1.065589592s 1.065693619s 1.066578671s 1.067796541s 1.070134824s 1.070606374s 1.072258646s 1.076799223s 
1.076937593s 1.077184008s 1.080848129s 1.083418128s 1.085397493s 1.086916501s 1.088257148s 1.100353124s 1.102208548s 1.102315428s 1.111084757s 1.113496673s 1.128775532s 1.129447087s 1.138921349s 1.145433896s] Jul 20 02:36:09.200: INFO: 50 %ile: 939.30645ms Jul 20 02:36:09.200: INFO: 90 %ile: 1.070134824s Jul 20 02:36:09.200: INFO: 99 %ile: 1.138921349s Jul 20 02:36:09.200: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:36:09.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-9706" for this suite. • [SLOW TEST:17.676 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":294,"completed":150,"skipped":2545,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:36:09.770: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with configMap that has name projected-configmap-test-upd-8deda990-10d1-441c-931a-ce03b9fe43cc STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-8deda990-10d1-441c-931a-ce03b9fe43cc STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:37:22.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7911" for this suite. 
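------------------------------
Reference sketch: the projected-configMap spec above mounts a ConfigMap through a "projected" volume and polls until the kubelet syncs an updated ConfigMap into the mounted files. A minimal client-go sketch of such a pod, assuming an existing ConfigMap in the namespace; the pod, volume, and ConfigMap names are illustrative.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-cm-demo"}, // illustrative name
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "projected-cm",
				VolumeSource: corev1.VolumeSource{
					// A projected volume can merge ConfigMaps, Secrets, and more
					// into one directory; here it projects a single ConfigMap.
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "demo-cm"}, // illustrative
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "reader",
				Image:   "busybox",
				Command: []string{"sh", "-c", "while true; do cat /etc/projected/*; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-cm",
					MountPath: "/etc/projected",
				}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}

Editing the ConfigMap's data is eventually reflected under /etc/projected without restarting the pod; the propagation delay is bounded by the kubelet sync period and cache TTL, which is consistent with the roughly one-minute wait the spec above spends in "waiting to observe update in volume".
------------------------------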
• [SLOW TEST:72.860 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":294,"completed":151,"skipped":2575,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:37:22.631: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 20 02:37:23.158: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jul 20 02:37:25.170: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730809443, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730809443, loc:(*time.Location)(0x7deddc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730809443, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730809443, loc:(*time.Location)(0x7deddc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 20 02:37:28.214: INFO: Waiting for the number of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering a webhook that the server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap that should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:37:28.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace
"webhook-6610" for this suite. STEP: Destroying namespace "webhook-6610-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.911 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":294,"completed":152,"skipped":2588,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:37:28.542: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 20 02:37:28.592: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:37:29.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-8489" for this suite. 
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":294,"completed":153,"skipped":2589,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:37:29.935: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9165 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9165;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9165 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9165;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9165.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9165.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9165.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9165.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9165.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-9165.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9165.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-9165.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9165.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-9165.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9165.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-9165.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9165.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 32.146.98.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.98.146.32_udp@PTR;check="$$(dig +tcp +noall +answer +search 32.146.98.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.98.146.32_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9165 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9165;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9165 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9165;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9165.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9165.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9165.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9165.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9165.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-9165.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9165.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-9165.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9165.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-9165.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9165.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-9165.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9165.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 32.146.98.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.98.146.32_udp@PTR;check="$$(dig +tcp +noall +answer +search 32.146.98.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.98.146.32_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jul 20 02:37:38.269: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:37:38.273: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:37:38.276: INFO: Unable to read wheezy_udp@dns-test-service.dns-9165 from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:37:38.279: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9165 from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:37:38.282: INFO: Unable to read wheezy_udp@dns-test-service.dns-9165.svc from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:37:38.285: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9165.svc from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:37:38.287: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9165.svc from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:37:38.289: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9165.svc from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:37:38.322: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:37:38.325: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:37:38.327: INFO: Unable to read jessie_udp@dns-test-service.dns-9165 from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:37:38.330: INFO: Unable to read jessie_tcp@dns-test-service.dns-9165 from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:37:38.333: INFO: Unable to read jessie_udp@dns-test-service.dns-9165.svc from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:37:38.336: INFO: Unable to read 
jessie_tcp@dns-test-service.dns-9165.svc from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:37:38.339: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9165.svc from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:37:38.342: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9165.svc from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:37:38.360: INFO: Lookups using dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9165 wheezy_tcp@dns-test-service.dns-9165 wheezy_udp@dns-test-service.dns-9165.svc wheezy_tcp@dns-test-service.dns-9165.svc wheezy_udp@_http._tcp.dns-test-service.dns-9165.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9165.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9165 jessie_tcp@dns-test-service.dns-9165 jessie_udp@dns-test-service.dns-9165.svc jessie_tcp@dns-test-service.dns-9165.svc jessie_udp@_http._tcp.dns-test-service.dns-9165.svc jessie_tcp@_http._tcp.dns-test-service.dns-9165.svc] Jul 20 02:37:43.365: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:37:43.369: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:37:43.371: INFO: Unable to read wheezy_udp@dns-test-service.dns-9165 from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:37:43.374: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9165 from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:37:43.377: INFO: Unable to read wheezy_udp@dns-test-service.dns-9165.svc from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:37:43.379: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9165.svc from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:37:43.382: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9165.svc from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:37:43.385: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9165.svc from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:37:43.405: INFO: Unable to read 
jessie_udp@dns-test-service from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:37:43.407: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:37:43.410: INFO: Unable to read jessie_udp@dns-test-service.dns-9165 from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:37:43.413: INFO: Unable to read jessie_tcp@dns-test-service.dns-9165 from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:37:43.417: INFO: Unable to read jessie_udp@dns-test-service.dns-9165.svc from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:37:43.421: INFO: Unable to read jessie_tcp@dns-test-service.dns-9165.svc from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:37:43.424: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9165.svc from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:37:43.427: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9165.svc from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:37:43.442: INFO: Lookups using dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9165 wheezy_tcp@dns-test-service.dns-9165 wheezy_udp@dns-test-service.dns-9165.svc wheezy_tcp@dns-test-service.dns-9165.svc wheezy_udp@_http._tcp.dns-test-service.dns-9165.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9165.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9165 jessie_tcp@dns-test-service.dns-9165 jessie_udp@dns-test-service.dns-9165.svc jessie_tcp@dns-test-service.dns-9165.svc jessie_udp@_http._tcp.dns-test-service.dns-9165.svc jessie_tcp@_http._tcp.dns-test-service.dns-9165.svc] Jul 20 02:37:48.365: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:37:48.368: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:37:48.371: INFO: Unable to read wheezy_udp@dns-test-service.dns-9165 from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:37:48.375: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9165 from pod 
dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:37:48.378: INFO: Unable to read wheezy_udp@dns-test-service.dns-9165.svc from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:37:48.382: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9165.svc from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:37:48.385: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9165.svc from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:37:48.388: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9165.svc from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:37:48.408: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:37:48.410: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:37:48.413: INFO: Unable to read jessie_udp@dns-test-service.dns-9165 from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:37:48.415: INFO: Unable to read jessie_tcp@dns-test-service.dns-9165 from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:37:48.418: INFO: Unable to read jessie_udp@dns-test-service.dns-9165.svc from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:37:48.421: INFO: Unable to read jessie_tcp@dns-test-service.dns-9165.svc from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:37:48.423: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9165.svc from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:37:48.426: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9165.svc from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:37:48.442: INFO: Lookups using dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9165 wheezy_tcp@dns-test-service.dns-9165 wheezy_udp@dns-test-service.dns-9165.svc wheezy_tcp@dns-test-service.dns-9165.svc 
wheezy_udp@_http._tcp.dns-test-service.dns-9165.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9165.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9165 jessie_tcp@dns-test-service.dns-9165 jessie_udp@dns-test-service.dns-9165.svc jessie_tcp@dns-test-service.dns-9165.svc jessie_udp@_http._tcp.dns-test-service.dns-9165.svc jessie_tcp@_http._tcp.dns-test-service.dns-9165.svc] Jul 20 02:37:53.365: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:37:53.369: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:37:53.371: INFO: Unable to read wheezy_udp@dns-test-service.dns-9165 from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:37:53.374: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9165 from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:37:53.376: INFO: Unable to read wheezy_udp@dns-test-service.dns-9165.svc from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:37:53.379: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9165.svc from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:37:53.382: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9165.svc from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:37:53.385: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9165.svc from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:37:53.430: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:37:53.432: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:37:53.434: INFO: Unable to read jessie_udp@dns-test-service.dns-9165 from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:37:53.437: INFO: Unable to read jessie_tcp@dns-test-service.dns-9165 from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:37:53.439: INFO: Unable to read jessie_udp@dns-test-service.dns-9165.svc from pod 
dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:37:53.441: INFO: Unable to read jessie_tcp@dns-test-service.dns-9165.svc from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:37:53.443: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9165.svc from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:37:53.445: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9165.svc from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:37:53.662: INFO: Lookups using dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9165 wheezy_tcp@dns-test-service.dns-9165 wheezy_udp@dns-test-service.dns-9165.svc wheezy_tcp@dns-test-service.dns-9165.svc wheezy_udp@_http._tcp.dns-test-service.dns-9165.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9165.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9165 jessie_tcp@dns-test-service.dns-9165 jessie_udp@dns-test-service.dns-9165.svc jessie_tcp@dns-test-service.dns-9165.svc jessie_udp@_http._tcp.dns-test-service.dns-9165.svc jessie_tcp@_http._tcp.dns-test-service.dns-9165.svc] Jul 20 02:37:58.365: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:37:58.368: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:37:58.372: INFO: Unable to read wheezy_udp@dns-test-service.dns-9165 from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:37:58.375: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9165 from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:37:58.379: INFO: Unable to read wheezy_udp@dns-test-service.dns-9165.svc from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:37:58.381: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9165.svc from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:37:58.383: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9165.svc from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:37:58.385: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9165.svc from pod 
dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:37:58.405: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:37:58.407: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:37:58.410: INFO: Unable to read jessie_udp@dns-test-service.dns-9165 from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:37:58.412: INFO: Unable to read jessie_tcp@dns-test-service.dns-9165 from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:37:58.415: INFO: Unable to read jessie_udp@dns-test-service.dns-9165.svc from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:37:58.417: INFO: Unable to read jessie_tcp@dns-test-service.dns-9165.svc from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:37:58.419: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9165.svc from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:37:58.421: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9165.svc from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:37:58.437: INFO: Lookups using dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9165 wheezy_tcp@dns-test-service.dns-9165 wheezy_udp@dns-test-service.dns-9165.svc wheezy_tcp@dns-test-service.dns-9165.svc wheezy_udp@_http._tcp.dns-test-service.dns-9165.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9165.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9165 jessie_tcp@dns-test-service.dns-9165 jessie_udp@dns-test-service.dns-9165.svc jessie_tcp@dns-test-service.dns-9165.svc jessie_udp@_http._tcp.dns-test-service.dns-9165.svc jessie_tcp@_http._tcp.dns-test-service.dns-9165.svc] Jul 20 02:38:03.451: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:38:03.455: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:38:03.458: INFO: Unable to read wheezy_udp@dns-test-service.dns-9165 from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the 
server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:38:03.461: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9165 from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:38:03.464: INFO: Unable to read wheezy_udp@dns-test-service.dns-9165.svc from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:38:03.467: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9165.svc from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:38:03.470: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9165.svc from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:38:03.473: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9165.svc from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:38:03.531: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:38:03.533: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:38:03.535: INFO: Unable to read jessie_udp@dns-test-service.dns-9165 from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:38:03.537: INFO: Unable to read jessie_tcp@dns-test-service.dns-9165 from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:38:03.540: INFO: Unable to read jessie_udp@dns-test-service.dns-9165.svc from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:38:03.542: INFO: Unable to read jessie_tcp@dns-test-service.dns-9165.svc from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:38:03.545: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9165.svc from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:38:03.547: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9165.svc from pod dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2: the server could not find the requested resource (get pods dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2) Jul 20 02:38:03.562: INFO: Lookups using dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2 failed for: [wheezy_udp@dns-test-service 
wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9165 wheezy_tcp@dns-test-service.dns-9165 wheezy_udp@dns-test-service.dns-9165.svc wheezy_tcp@dns-test-service.dns-9165.svc wheezy_udp@_http._tcp.dns-test-service.dns-9165.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9165.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9165 jessie_tcp@dns-test-service.dns-9165 jessie_udp@dns-test-service.dns-9165.svc jessie_tcp@dns-test-service.dns-9165.svc jessie_udp@_http._tcp.dns-test-service.dns-9165.svc jessie_tcp@_http._tcp.dns-test-service.dns-9165.svc] Jul 20 02:38:08.438: INFO: DNS probes using dns-9165/dns-test-8a80d911-b3fa-4a88-bcff-a6f1dcec15c2 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:38:09.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9165" for this suite. • [SLOW TEST:39.172 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":294,"completed":154,"skipped":2599,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:38:09.108: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 20 02:38:09.192: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:38:09.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-460" for this suite. 
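For reference, the status sub-resource operations this test exercises (get/update/patch against the CRD's /status endpoint) can be reproduced with client-go's apiextensions clientset. The sketch below is a minimal illustration under assumptions, not the suite's own code: the CRD name "dummies.example.com" and the storedVersions patch payload are hypothetical stand-ins, since the log does not print the generated CRD.

package main

import (
	"context"
	"fmt"

	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig the suite uses in this run.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := apiextclient.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	name := "dummies.example.com" // hypothetical CRD name

	// GET: the status sub-resource travels with the CRD object itself.
	crd, err := client.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("observed status conditions:", len(crd.Status.Conditions))

	// PATCH: the trailing "status" subresource argument routes the merge
	// patch to /status instead of the main resource.
	patch := []byte(`{"status":{"storedVersions":["v1"]}}`) // hypothetical payload
	if _, err := client.ApiextensionsV1().CustomResourceDefinitions().Patch(
		ctx, name, types.MergePatchType, patch, metav1.PatchOptions{}, "status"); err != nil {
		panic(err)
	}
}

Passing "status" as the final subresource argument is what distinguishes a status patch from a patch of the main resource; omitting it would mutate the CRD spec instead.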
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":294,"completed":155,"skipped":2602,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:38:09.807: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:39:09.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8112" for this suite. • [SLOW TEST:60.091 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":294,"completed":156,"skipped":2612,"failed":0} SS ------------------------------ [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:39:09.898: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:731 [It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-1109 Jul 20 02:39:14.009: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-1109 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Jul 20 02:39:14.233: INFO: stderr: "I0720 02:39:14.140864 2145 log.go:181] (0xc000f17290) (0xc000e985a0) Create stream\nI0720 02:39:14.140923 2145 
log.go:181] (0xc000f17290) (0xc000e985a0) Stream added, broadcasting: 1\nI0720 02:39:14.146279 2145 log.go:181] (0xc000f17290) Reply frame received for 1\nI0720 02:39:14.146329 2145 log.go:181] (0xc000f17290) (0xc0005d30e0) Create stream\nI0720 02:39:14.146346 2145 log.go:181] (0xc000f17290) (0xc0005d30e0) Stream added, broadcasting: 3\nI0720 02:39:14.147328 2145 log.go:181] (0xc000f17290) Reply frame received for 3\nI0720 02:39:14.147368 2145 log.go:181] (0xc000f17290) (0xc0004a5220) Create stream\nI0720 02:39:14.147382 2145 log.go:181] (0xc000f17290) (0xc0004a5220) Stream added, broadcasting: 5\nI0720 02:39:14.148201 2145 log.go:181] (0xc000f17290) Reply frame received for 5\nI0720 02:39:14.219088 2145 log.go:181] (0xc000f17290) Data frame received for 5\nI0720 02:39:14.219123 2145 log.go:181] (0xc0004a5220) (5) Data frame handling\nI0720 02:39:14.219145 2145 log.go:181] (0xc0004a5220) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI0720 02:39:14.225759 2145 log.go:181] (0xc000f17290) Data frame received for 3\nI0720 02:39:14.225785 2145 log.go:181] (0xc0005d30e0) (3) Data frame handling\nI0720 02:39:14.225794 2145 log.go:181] (0xc0005d30e0) (3) Data frame sent\nI0720 02:39:14.226320 2145 log.go:181] (0xc000f17290) Data frame received for 5\nI0720 02:39:14.226353 2145 log.go:181] (0xc0004a5220) (5) Data frame handling\nI0720 02:39:14.226550 2145 log.go:181] (0xc000f17290) Data frame received for 3\nI0720 02:39:14.226561 2145 log.go:181] (0xc0005d30e0) (3) Data frame handling\nI0720 02:39:14.228295 2145 log.go:181] (0xc000f17290) Data frame received for 1\nI0720 02:39:14.228312 2145 log.go:181] (0xc000e985a0) (1) Data frame handling\nI0720 02:39:14.228318 2145 log.go:181] (0xc000e985a0) (1) Data frame sent\nI0720 02:39:14.228325 2145 log.go:181] (0xc000f17290) (0xc000e985a0) Stream removed, broadcasting: 1\nI0720 02:39:14.228409 2145 log.go:181] (0xc000f17290) Go away received\nI0720 02:39:14.228594 2145 log.go:181] (0xc000f17290) (0xc000e985a0) Stream removed, broadcasting: 1\nI0720 02:39:14.228614 2145 log.go:181] (0xc000f17290) (0xc0005d30e0) Stream removed, broadcasting: 3\nI0720 02:39:14.228620 2145 log.go:181] (0xc000f17290) (0xc0004a5220) Stream removed, broadcasting: 5\n" Jul 20 02:39:14.233: INFO: stdout: "iptables" Jul 20 02:39:14.233: INFO: proxyMode: iptables Jul 20 02:39:14.239: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jul 20 02:39:14.269: INFO: Pod kube-proxy-mode-detector still exists Jul 20 02:39:16.270: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jul 20 02:39:16.274: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-clusterip-timeout in namespace services-1109 STEP: creating replication controller affinity-clusterip-timeout in namespace services-1109 I0720 02:39:16.346920 8 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-1109, replica count: 3 I0720 02:39:19.397299 8 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0720 02:39:22.397562 8 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jul 20 02:39:22.403: INFO: Creating new exec pod Jul 20 02:39:27.458: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec 
--namespace=services-1109 execpod-affinityk2bcb -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' Jul 20 02:39:27.685: INFO: stderr: "I0720 02:39:27.593032 2163 log.go:181] (0xc0006af550) (0xc0004e7860) Create stream\nI0720 02:39:27.593075 2163 log.go:181] (0xc0006af550) (0xc0004e7860) Stream added, broadcasting: 1\nI0720 02:39:27.599037 2163 log.go:181] (0xc0006af550) Reply frame received for 1\nI0720 02:39:27.599108 2163 log.go:181] (0xc0006af550) (0xc0003ea6e0) Create stream\nI0720 02:39:27.599128 2163 log.go:181] (0xc0006af550) (0xc0003ea6e0) Stream added, broadcasting: 3\nI0720 02:39:27.600208 2163 log.go:181] (0xc0006af550) Reply frame received for 3\nI0720 02:39:27.600238 2163 log.go:181] (0xc0006af550) (0xc0002e6000) Create stream\nI0720 02:39:27.600247 2163 log.go:181] (0xc0006af550) (0xc0002e6000) Stream added, broadcasting: 5\nI0720 02:39:27.601388 2163 log.go:181] (0xc0006af550) Reply frame received for 5\nI0720 02:39:27.678443 2163 log.go:181] (0xc0006af550) Data frame received for 5\nI0720 02:39:27.678492 2163 log.go:181] (0xc0002e6000) (5) Data frame handling\nI0720 02:39:27.678514 2163 log.go:181] (0xc0002e6000) (5) Data frame sent\nI0720 02:39:27.678527 2163 log.go:181] (0xc0006af550) Data frame received for 5\nI0720 02:39:27.678542 2163 log.go:181] (0xc0002e6000) (5) Data frame handling\n+ nc -zv -t -w 2 affinity-clusterip-timeout 80\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\nI0720 02:39:27.678588 2163 log.go:181] (0xc0002e6000) (5) Data frame sent\nI0720 02:39:27.678830 2163 log.go:181] (0xc0006af550) Data frame received for 5\nI0720 02:39:27.678855 2163 log.go:181] (0xc0002e6000) (5) Data frame handling\nI0720 02:39:27.679083 2163 log.go:181] (0xc0006af550) Data frame received for 3\nI0720 02:39:27.679114 2163 log.go:181] (0xc0003ea6e0) (3) Data frame handling\nI0720 02:39:27.681214 2163 log.go:181] (0xc0006af550) Data frame received for 1\nI0720 02:39:27.681233 2163 log.go:181] (0xc0004e7860) (1) Data frame handling\nI0720 02:39:27.681245 2163 log.go:181] (0xc0004e7860) (1) Data frame sent\nI0720 02:39:27.681257 2163 log.go:181] (0xc0006af550) (0xc0004e7860) Stream removed, broadcasting: 1\nI0720 02:39:27.681306 2163 log.go:181] (0xc0006af550) Go away received\nI0720 02:39:27.681567 2163 log.go:181] (0xc0006af550) (0xc0004e7860) Stream removed, broadcasting: 1\nI0720 02:39:27.681584 2163 log.go:181] (0xc0006af550) (0xc0003ea6e0) Stream removed, broadcasting: 3\nI0720 02:39:27.681594 2163 log.go:181] (0xc0006af550) (0xc0002e6000) Stream removed, broadcasting: 5\n" Jul 20 02:39:27.686: INFO: stdout: "" Jul 20 02:39:27.687: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-1109 execpod-affinityk2bcb -- /bin/sh -x -c nc -zv -t -w 2 10.97.236.209 80' Jul 20 02:39:27.909: INFO: stderr: "I0720 02:39:27.830456 2181 log.go:181] (0xc00097f340) (0xc000f1e280) Create stream\nI0720 02:39:27.830525 2181 log.go:181] (0xc00097f340) (0xc000f1e280) Stream added, broadcasting: 1\nI0720 02:39:27.838851 2181 log.go:181] (0xc00097f340) Reply frame received for 1\nI0720 02:39:27.838925 2181 log.go:181] (0xc00097f340) (0xc0008c70e0) Create stream\nI0720 02:39:27.838979 2181 log.go:181] (0xc00097f340) (0xc0008c70e0) Stream added, broadcasting: 3\nI0720 02:39:27.839886 2181 log.go:181] (0xc00097f340) Reply frame received for 3\nI0720 02:39:27.839920 2181 log.go:181] (0xc00097f340) (0xc0008923c0) Create stream\nI0720 02:39:27.839930 2181 log.go:181] (0xc00097f340) 
(0xc0008923c0) Stream added, broadcasting: 5\nI0720 02:39:27.840824 2181 log.go:181] (0xc00097f340) Reply frame received for 5\nI0720 02:39:27.901757 2181 log.go:181] (0xc00097f340) Data frame received for 3\nI0720 02:39:27.901800 2181 log.go:181] (0xc0008c70e0) (3) Data frame handling\nI0720 02:39:27.901857 2181 log.go:181] (0xc00097f340) Data frame received for 5\nI0720 02:39:27.901889 2181 log.go:181] (0xc0008923c0) (5) Data frame handling\nI0720 02:39:27.901914 2181 log.go:181] (0xc0008923c0) (5) Data frame sent\nI0720 02:39:27.901935 2181 log.go:181] (0xc00097f340) Data frame received for 5\nI0720 02:39:27.901951 2181 log.go:181] (0xc0008923c0) (5) Data frame handling\n+ nc -zv -t -w 2 10.97.236.209 80\nConnection to 10.97.236.209 80 port [tcp/http] succeeded!\nI0720 02:39:27.903191 2181 log.go:181] (0xc00097f340) Data frame received for 1\nI0720 02:39:27.903227 2181 log.go:181] (0xc000f1e280) (1) Data frame handling\nI0720 02:39:27.903257 2181 log.go:181] (0xc000f1e280) (1) Data frame sent\nI0720 02:39:27.903280 2181 log.go:181] (0xc00097f340) (0xc000f1e280) Stream removed, broadcasting: 1\nI0720 02:39:27.903309 2181 log.go:181] (0xc00097f340) Go away received\nI0720 02:39:27.903753 2181 log.go:181] (0xc00097f340) (0xc000f1e280) Stream removed, broadcasting: 1\nI0720 02:39:27.903772 2181 log.go:181] (0xc00097f340) (0xc0008c70e0) Stream removed, broadcasting: 3\nI0720 02:39:27.903790 2181 log.go:181] (0xc00097f340) (0xc0008923c0) Stream removed, broadcasting: 5\n" Jul 20 02:39:27.909: INFO: stdout: "" Jul 20 02:39:27.909: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-1109 execpod-affinityk2bcb -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.97.236.209:80/ ; done' Jul 20 02:39:28.220: INFO: stderr: "I0720 02:39:28.052302 2199 log.go:181] (0xc00013adc0) (0xc0009e5860) Create stream\nI0720 02:39:28.052372 2199 log.go:181] (0xc00013adc0) (0xc0009e5860) Stream added, broadcasting: 1\nI0720 02:39:28.056657 2199 log.go:181] (0xc00013adc0) Reply frame received for 1\nI0720 02:39:28.056704 2199 log.go:181] (0xc00013adc0) (0xc0003eb7c0) Create stream\nI0720 02:39:28.056717 2199 log.go:181] (0xc00013adc0) (0xc0003eb7c0) Stream added, broadcasting: 3\nI0720 02:39:28.057623 2199 log.go:181] (0xc00013adc0) Reply frame received for 3\nI0720 02:39:28.057657 2199 log.go:181] (0xc00013adc0) (0xc0001e06e0) Create stream\nI0720 02:39:28.057666 2199 log.go:181] (0xc00013adc0) (0xc0001e06e0) Stream added, broadcasting: 5\nI0720 02:39:28.058436 2199 log.go:181] (0xc00013adc0) Reply frame received for 5\nI0720 02:39:28.117047 2199 log.go:181] (0xc00013adc0) Data frame received for 3\nI0720 02:39:28.117078 2199 log.go:181] (0xc0003eb7c0) (3) Data frame handling\nI0720 02:39:28.117088 2199 log.go:181] (0xc0003eb7c0) (3) Data frame sent\nI0720 02:39:28.117126 2199 log.go:181] (0xc00013adc0) Data frame received for 5\nI0720 02:39:28.117168 2199 log.go:181] (0xc0001e06e0) (5) Data frame handling\nI0720 02:39:28.117190 2199 log.go:181] (0xc0001e06e0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.236.209:80/\nI0720 02:39:28.121063 2199 log.go:181] (0xc00013adc0) Data frame received for 3\nI0720 02:39:28.121083 2199 log.go:181] (0xc0003eb7c0) (3) Data frame handling\nI0720 02:39:28.121101 2199 log.go:181] (0xc0003eb7c0) (3) Data frame sent\nI0720 02:39:28.121304 2199 log.go:181] (0xc00013adc0) Data frame received for 5\nI0720 02:39:28.121317 2199 
log.go:181] (0xc0001e06e0) (5) Data frame handling\nI0720 02:39:28.121324 2199 log.go:181] (0xc0001e06e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.236.209:80/\nI0720 02:39:28.121401 2199 log.go:181] (0xc00013adc0) Data frame received for 3\nI0720 02:39:28.121420 2199 log.go:181] (0xc0003eb7c0) (3) Data frame handling\nI0720 02:39:28.121436 2199 log.go:181] (0xc0003eb7c0) (3) Data frame sent\nI0720 02:39:28.127541 2199 log.go:181] (0xc00013adc0) Data frame received for 3\nI0720 02:39:28.127561 2199 log.go:181] (0xc0003eb7c0) (3) Data frame handling\nI0720 02:39:28.127576 2199 log.go:181] (0xc0003eb7c0) (3) Data frame sent\nI0720 02:39:28.128174 2199 log.go:181] (0xc00013adc0) Data frame received for 3\nI0720 02:39:28.128206 2199 log.go:181] (0xc0003eb7c0) (3) Data frame handling\nI0720 02:39:28.128217 2199 log.go:181] (0xc0003eb7c0) (3) Data frame sent\nI0720 02:39:28.128236 2199 log.go:181] (0xc00013adc0) Data frame received for 5\nI0720 02:39:28.128258 2199 log.go:181] (0xc0001e06e0) (5) Data frame handling\nI0720 02:39:28.128290 2199 log.go:181] (0xc0001e06e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.236.209:80/\nI0720 02:39:28.132369 2199 log.go:181] (0xc00013adc0) Data frame received for 3\nI0720 02:39:28.132388 2199 log.go:181] (0xc0003eb7c0) (3) Data frame handling\nI0720 02:39:28.132401 2199 log.go:181] (0xc0003eb7c0) (3) Data frame sent\nI0720 02:39:28.132691 2199 log.go:181] (0xc00013adc0) Data frame received for 3\nI0720 02:39:28.132705 2199 log.go:181] (0xc0003eb7c0) (3) Data frame handling\nI0720 02:39:28.132712 2199 log.go:181] (0xc0003eb7c0) (3) Data frame sent\nI0720 02:39:28.132807 2199 log.go:181] (0xc00013adc0) Data frame received for 5\nI0720 02:39:28.132835 2199 log.go:181] (0xc0001e06e0) (5) Data frame handling\nI0720 02:39:28.132859 2199 log.go:181] (0xc0001e06e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.236.209:80/\nI0720 02:39:28.137121 2199 log.go:181] (0xc00013adc0) Data frame received for 3\nI0720 02:39:28.137137 2199 log.go:181] (0xc0003eb7c0) (3) Data frame handling\nI0720 02:39:28.137152 2199 log.go:181] (0xc0003eb7c0) (3) Data frame sent\nI0720 02:39:28.137738 2199 log.go:181] (0xc00013adc0) Data frame received for 3\nI0720 02:39:28.137750 2199 log.go:181] (0xc0003eb7c0) (3) Data frame handling\nI0720 02:39:28.137762 2199 log.go:181] (0xc0003eb7c0) (3) Data frame sent\nI0720 02:39:28.137774 2199 log.go:181] (0xc00013adc0) Data frame received for 5\nI0720 02:39:28.137782 2199 log.go:181] (0xc0001e06e0) (5) Data frame handling\nI0720 02:39:28.137793 2199 log.go:181] (0xc0001e06e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.236.209:80/\nI0720 02:39:28.143420 2199 log.go:181] (0xc00013adc0) Data frame received for 3\nI0720 02:39:28.143453 2199 log.go:181] (0xc0003eb7c0) (3) Data frame handling\nI0720 02:39:28.143479 2199 log.go:181] (0xc0003eb7c0) (3) Data frame sent\nI0720 02:39:28.145117 2199 log.go:181] (0xc00013adc0) Data frame received for 3\nI0720 02:39:28.145142 2199 log.go:181] (0xc00013adc0) Data frame received for 5\nI0720 02:39:28.145167 2199 log.go:181] (0xc0001e06e0) (5) Data frame handling\nI0720 02:39:28.145177 2199 log.go:181] (0xc0001e06e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.236.209:80/\nI0720 02:39:28.145192 2199 log.go:181] (0xc0003eb7c0) (3) Data frame handling\nI0720 02:39:28.145202 2199 log.go:181] (0xc0003eb7c0) (3) Data frame sent\nI0720 02:39:28.151186 2199 
log.go:181] (0xc00013adc0) Data frame received for 3\nI0720 02:39:28.151206 2199 log.go:181] (0xc0003eb7c0) (3) Data frame handling\nI0720 02:39:28.151222 2199 log.go:181] (0xc0003eb7c0) (3) Data frame sent\nI0720 02:39:28.152016 2199 log.go:181] (0xc00013adc0) Data frame received for 5\nI0720 02:39:28.152051 2199 log.go:181] (0xc0001e06e0) (5) Data frame handling\nI0720 02:39:28.152075 2199 log.go:181] (0xc0001e06e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.236.209:80/\nI0720 02:39:28.152102 2199 log.go:181] (0xc00013adc0) Data frame received for 3\nI0720 02:39:28.152133 2199 log.go:181] (0xc0003eb7c0) (3) Data frame handling\nI0720 02:39:28.152164 2199 log.go:181] (0xc0003eb7c0) (3) Data frame sent\nI0720 02:39:28.158372 2199 log.go:181] (0xc00013adc0) Data frame received for 3\nI0720 02:39:28.158402 2199 log.go:181] (0xc0003eb7c0) (3) Data frame handling\nI0720 02:39:28.158422 2199 log.go:181] (0xc0003eb7c0) (3) Data frame sent\nI0720 02:39:28.158927 2199 log.go:181] (0xc00013adc0) Data frame received for 3\nI0720 02:39:28.158965 2199 log.go:181] (0xc0003eb7c0) (3) Data frame handling\nI0720 02:39:28.158995 2199 log.go:181] (0xc00013adc0) Data frame received for 5\nI0720 02:39:28.159042 2199 log.go:181] (0xc0001e06e0) (5) Data frame handling\nI0720 02:39:28.159068 2199 log.go:181] (0xc0001e06e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.236.209:80/\nI0720 02:39:28.159096 2199 log.go:181] (0xc0003eb7c0) (3) Data frame sent\nI0720 02:39:28.163796 2199 log.go:181] (0xc00013adc0) Data frame received for 3\nI0720 02:39:28.163856 2199 log.go:181] (0xc0003eb7c0) (3) Data frame handling\nI0720 02:39:28.163876 2199 log.go:181] (0xc0003eb7c0) (3) Data frame sent\nI0720 02:39:28.164499 2199 log.go:181] (0xc00013adc0) Data frame received for 5\nI0720 02:39:28.164518 2199 log.go:181] (0xc0001e06e0) (5) Data frame handling\nI0720 02:39:28.164535 2199 log.go:181] (0xc0001e06e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.236.209:80/\nI0720 02:39:28.165162 2199 log.go:181] (0xc00013adc0) Data frame received for 3\nI0720 02:39:28.165186 2199 log.go:181] (0xc0003eb7c0) (3) Data frame handling\nI0720 02:39:28.165199 2199 log.go:181] (0xc0003eb7c0) (3) Data frame sent\nI0720 02:39:28.169775 2199 log.go:181] (0xc00013adc0) Data frame received for 3\nI0720 02:39:28.169794 2199 log.go:181] (0xc0003eb7c0) (3) Data frame handling\nI0720 02:39:28.169806 2199 log.go:181] (0xc0003eb7c0) (3) Data frame sent\nI0720 02:39:28.170317 2199 log.go:181] (0xc00013adc0) Data frame received for 3\nI0720 02:39:28.170347 2199 log.go:181] (0xc0003eb7c0) (3) Data frame handling\nI0720 02:39:28.170363 2199 log.go:181] (0xc0003eb7c0) (3) Data frame sent\nI0720 02:39:28.170377 2199 log.go:181] (0xc00013adc0) Data frame received for 5\nI0720 02:39:28.170384 2199 log.go:181] (0xc0001e06e0) (5) Data frame handling\nI0720 02:39:28.170392 2199 log.go:181] (0xc0001e06e0) (5) Data frame sent\nI0720 02:39:28.170402 2199 log.go:181] (0xc00013adc0) Data frame received for 5\nI0720 02:39:28.170408 2199 log.go:181] (0xc0001e06e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.236.209:80/\nI0720 02:39:28.170424 2199 log.go:181] (0xc0001e06e0) (5) Data frame sent\nI0720 02:39:28.175680 2199 log.go:181] (0xc00013adc0) Data frame received for 3\nI0720 02:39:28.175701 2199 log.go:181] (0xc0003eb7c0) (3) Data frame handling\nI0720 02:39:28.175717 2199 log.go:181] (0xc0003eb7c0) (3) Data frame sent\nI0720 02:39:28.175968 
2199 log.go:181] (0xc00013adc0) Data frame received for 3\nI0720 02:39:28.175977 2199 log.go:181] (0xc0003eb7c0) (3) Data frame handling\nI0720 02:39:28.175982 2199 log.go:181] (0xc0003eb7c0) (3) Data frame sent\nI0720 02:39:28.176011 2199 log.go:181] (0xc00013adc0) Data frame received for 5\nI0720 02:39:28.176028 2199 log.go:181] (0xc0001e06e0) (5) Data frame handling\nI0720 02:39:28.176035 2199 log.go:181] (0xc0001e06e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.236.209:80/\nI0720 02:39:28.180459 2199 log.go:181] (0xc00013adc0) Data frame received for 3\nI0720 02:39:28.180492 2199 log.go:181] (0xc0003eb7c0) (3) Data frame handling\nI0720 02:39:28.180521 2199 log.go:181] (0xc0003eb7c0) (3) Data frame sent\nI0720 02:39:28.181239 2199 log.go:181] (0xc00013adc0) Data frame received for 5\nI0720 02:39:28.181258 2199 log.go:181] (0xc0001e06e0) (5) Data frame handling\nI0720 02:39:28.181266 2199 log.go:181] (0xc0001e06e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.236.209:80/\nI0720 02:39:28.181277 2199 log.go:181] (0xc00013adc0) Data frame received for 3\nI0720 02:39:28.181285 2199 log.go:181] (0xc0003eb7c0) (3) Data frame handling\nI0720 02:39:28.181295 2199 log.go:181] (0xc0003eb7c0) (3) Data frame sent\nI0720 02:39:28.187048 2199 log.go:181] (0xc00013adc0) Data frame received for 3\nI0720 02:39:28.187068 2199 log.go:181] (0xc0003eb7c0) (3) Data frame handling\nI0720 02:39:28.187079 2199 log.go:181] (0xc0003eb7c0) (3) Data frame sent\nI0720 02:39:28.187586 2199 log.go:181] (0xc00013adc0) Data frame received for 5\nI0720 02:39:28.187609 2199 log.go:181] (0xc0001e06e0) (5) Data frame handling\nI0720 02:39:28.187629 2199 log.go:181] (0xc0001e06e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.236.209:80/\nI0720 02:39:28.187639 2199 log.go:181] (0xc00013adc0) Data frame received for 3\nI0720 02:39:28.187650 2199 log.go:181] (0xc0003eb7c0) (3) Data frame handling\nI0720 02:39:28.187665 2199 log.go:181] (0xc0003eb7c0) (3) Data frame sent\nI0720 02:39:28.194312 2199 log.go:181] (0xc00013adc0) Data frame received for 3\nI0720 02:39:28.194335 2199 log.go:181] (0xc0003eb7c0) (3) Data frame handling\nI0720 02:39:28.194353 2199 log.go:181] (0xc0003eb7c0) (3) Data frame sent\nI0720 02:39:28.195027 2199 log.go:181] (0xc00013adc0) Data frame received for 5\nI0720 02:39:28.195049 2199 log.go:181] (0xc0001e06e0) (5) Data frame handling\nI0720 02:39:28.195063 2199 log.go:181] (0xc0001e06e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.236.209:80/\nI0720 02:39:28.195328 2199 log.go:181] (0xc00013adc0) Data frame received for 3\nI0720 02:39:28.195358 2199 log.go:181] (0xc0003eb7c0) (3) Data frame handling\nI0720 02:39:28.195395 2199 log.go:181] (0xc0003eb7c0) (3) Data frame sent\nI0720 02:39:28.200404 2199 log.go:181] (0xc00013adc0) Data frame received for 3\nI0720 02:39:28.200425 2199 log.go:181] (0xc0003eb7c0) (3) Data frame handling\nI0720 02:39:28.200439 2199 log.go:181] (0xc0003eb7c0) (3) Data frame sent\nI0720 02:39:28.201116 2199 log.go:181] (0xc00013adc0) Data frame received for 5\nI0720 02:39:28.201141 2199 log.go:181] (0xc0001e06e0) (5) Data frame handling\nI0720 02:39:28.201164 2199 log.go:181] (0xc0001e06e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.236.209:80/\nI0720 02:39:28.201181 2199 log.go:181] (0xc00013adc0) Data frame received for 3\nI0720 02:39:28.201196 2199 log.go:181] (0xc0003eb7c0) (3) Data frame handling\nI0720 02:39:28.201209 2199 
log.go:181] (0xc0003eb7c0) (3) Data frame sent\nI0720 02:39:28.205994 2199 log.go:181] (0xc00013adc0) Data frame received for 3\nI0720 02:39:28.206019 2199 log.go:181] (0xc0003eb7c0) (3) Data frame handling\nI0720 02:39:28.206035 2199 log.go:181] (0xc0003eb7c0) (3) Data frame sent\nI0720 02:39:28.206533 2199 log.go:181] (0xc00013adc0) Data frame received for 5\nI0720 02:39:28.206564 2199 log.go:181] (0xc0001e06e0) (5) Data frame handling\nI0720 02:39:28.206585 2199 log.go:181] (0xc0001e06e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.236.209:80/\nI0720 02:39:28.206686 2199 log.go:181] (0xc00013adc0) Data frame received for 3\nI0720 02:39:28.206702 2199 log.go:181] (0xc0003eb7c0) (3) Data frame handling\nI0720 02:39:28.206715 2199 log.go:181] (0xc0003eb7c0) (3) Data frame sent\nI0720 02:39:28.212473 2199 log.go:181] (0xc00013adc0) Data frame received for 3\nI0720 02:39:28.212500 2199 log.go:181] (0xc0003eb7c0) (3) Data frame handling\nI0720 02:39:28.212518 2199 log.go:181] (0xc0003eb7c0) (3) Data frame sent\nI0720 02:39:28.213486 2199 log.go:181] (0xc00013adc0) Data frame received for 5\nI0720 02:39:28.213516 2199 log.go:181] (0xc0001e06e0) (5) Data frame handling\nI0720 02:39:28.213583 2199 log.go:181] (0xc00013adc0) Data frame received for 3\nI0720 02:39:28.213608 2199 log.go:181] (0xc0003eb7c0) (3) Data frame handling\nI0720 02:39:28.215475 2199 log.go:181] (0xc00013adc0) Data frame received for 1\nI0720 02:39:28.215495 2199 log.go:181] (0xc0009e5860) (1) Data frame handling\nI0720 02:39:28.215509 2199 log.go:181] (0xc0009e5860) (1) Data frame sent\nI0720 02:39:28.215528 2199 log.go:181] (0xc00013adc0) (0xc0009e5860) Stream removed, broadcasting: 1\nI0720 02:39:28.215545 2199 log.go:181] (0xc00013adc0) Go away received\nI0720 02:39:28.215966 2199 log.go:181] (0xc00013adc0) (0xc0009e5860) Stream removed, broadcasting: 1\nI0720 02:39:28.215989 2199 log.go:181] (0xc00013adc0) (0xc0003eb7c0) Stream removed, broadcasting: 3\nI0720 02:39:28.216001 2199 log.go:181] (0xc00013adc0) (0xc0001e06e0) Stream removed, broadcasting: 5\n" Jul 20 02:39:28.221: INFO: stdout: "\naffinity-clusterip-timeout-cpfxl\naffinity-clusterip-timeout-cpfxl\naffinity-clusterip-timeout-cpfxl\naffinity-clusterip-timeout-cpfxl\naffinity-clusterip-timeout-cpfxl\naffinity-clusterip-timeout-cpfxl\naffinity-clusterip-timeout-cpfxl\naffinity-clusterip-timeout-cpfxl\naffinity-clusterip-timeout-cpfxl\naffinity-clusterip-timeout-cpfxl\naffinity-clusterip-timeout-cpfxl\naffinity-clusterip-timeout-cpfxl\naffinity-clusterip-timeout-cpfxl\naffinity-clusterip-timeout-cpfxl\naffinity-clusterip-timeout-cpfxl\naffinity-clusterip-timeout-cpfxl" Jul 20 02:39:28.221: INFO: Received response from host: affinity-clusterip-timeout-cpfxl Jul 20 02:39:28.221: INFO: Received response from host: affinity-clusterip-timeout-cpfxl Jul 20 02:39:28.221: INFO: Received response from host: affinity-clusterip-timeout-cpfxl Jul 20 02:39:28.221: INFO: Received response from host: affinity-clusterip-timeout-cpfxl Jul 20 02:39:28.221: INFO: Received response from host: affinity-clusterip-timeout-cpfxl Jul 20 02:39:28.221: INFO: Received response from host: affinity-clusterip-timeout-cpfxl Jul 20 02:39:28.221: INFO: Received response from host: affinity-clusterip-timeout-cpfxl Jul 20 02:39:28.221: INFO: Received response from host: affinity-clusterip-timeout-cpfxl Jul 20 02:39:28.221: INFO: Received response from host: affinity-clusterip-timeout-cpfxl Jul 20 02:39:28.221: INFO: Received response from host: 
affinity-clusterip-timeout-cpfxl Jul 20 02:39:28.221: INFO: Received response from host: affinity-clusterip-timeout-cpfxl Jul 20 02:39:28.221: INFO: Received response from host: affinity-clusterip-timeout-cpfxl Jul 20 02:39:28.221: INFO: Received response from host: affinity-clusterip-timeout-cpfxl Jul 20 02:39:28.221: INFO: Received response from host: affinity-clusterip-timeout-cpfxl Jul 20 02:39:28.221: INFO: Received response from host: affinity-clusterip-timeout-cpfxl Jul 20 02:39:28.221: INFO: Received response from host: affinity-clusterip-timeout-cpfxl Jul 20 02:39:28.221: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-1109 execpod-affinityk2bcb -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.97.236.209:80/' Jul 20 02:39:28.454: INFO: stderr: "I0720 02:39:28.365790 2217 log.go:181] (0xc000f0ef20) (0xc000a9a820) Create stream\nI0720 02:39:28.365867 2217 log.go:181] (0xc000f0ef20) (0xc000a9a820) Stream added, broadcasting: 1\nI0720 02:39:28.371716 2217 log.go:181] (0xc000f0ef20) Reply frame received for 1\nI0720 02:39:28.371765 2217 log.go:181] (0xc000f0ef20) (0xc00078e280) Create stream\nI0720 02:39:28.371778 2217 log.go:181] (0xc000f0ef20) (0xc00078e280) Stream added, broadcasting: 3\nI0720 02:39:28.372910 2217 log.go:181] (0xc000f0ef20) Reply frame received for 3\nI0720 02:39:28.372948 2217 log.go:181] (0xc000f0ef20) (0xc00056cc80) Create stream\nI0720 02:39:28.372957 2217 log.go:181] (0xc000f0ef20) (0xc00056cc80) Stream added, broadcasting: 5\nI0720 02:39:28.374129 2217 log.go:181] (0xc000f0ef20) Reply frame received for 5\nI0720 02:39:28.442606 2217 log.go:181] (0xc000f0ef20) Data frame received for 5\nI0720 02:39:28.442635 2217 log.go:181] (0xc00056cc80) (5) Data frame handling\nI0720 02:39:28.442647 2217 log.go:181] (0xc00056cc80) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.97.236.209:80/\nI0720 02:39:28.447341 2217 log.go:181] (0xc000f0ef20) Data frame received for 3\nI0720 02:39:28.447369 2217 log.go:181] (0xc00078e280) (3) Data frame handling\nI0720 02:39:28.447445 2217 log.go:181] (0xc00078e280) (3) Data frame sent\nI0720 02:39:28.448035 2217 log.go:181] (0xc000f0ef20) Data frame received for 5\nI0720 02:39:28.448072 2217 log.go:181] (0xc00056cc80) (5) Data frame handling\nI0720 02:39:28.448099 2217 log.go:181] (0xc000f0ef20) Data frame received for 3\nI0720 02:39:28.448117 2217 log.go:181] (0xc00078e280) (3) Data frame handling\nI0720 02:39:28.449634 2217 log.go:181] (0xc000f0ef20) Data frame received for 1\nI0720 02:39:28.449661 2217 log.go:181] (0xc000a9a820) (1) Data frame handling\nI0720 02:39:28.449690 2217 log.go:181] (0xc000a9a820) (1) Data frame sent\nI0720 02:39:28.449703 2217 log.go:181] (0xc000f0ef20) (0xc000a9a820) Stream removed, broadcasting: 1\nI0720 02:39:28.449878 2217 log.go:181] (0xc000f0ef20) Go away received\nI0720 02:39:28.450090 2217 log.go:181] (0xc000f0ef20) (0xc000a9a820) Stream removed, broadcasting: 1\nI0720 02:39:28.450104 2217 log.go:181] (0xc000f0ef20) (0xc00078e280) Stream removed, broadcasting: 3\nI0720 02:39:28.450110 2217 log.go:181] (0xc000f0ef20) (0xc00056cc80) Stream removed, broadcasting: 5\n" Jul 20 02:39:28.454: INFO: stdout: "affinity-clusterip-timeout-cpfxl" Jul 20 02:39:43.454: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-1109 execpod-affinityk2bcb -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.97.236.209:80/' Jul 20 
02:39:43.675: INFO: stderr: "I0720 02:39:43.581804 2235 log.go:181] (0xc000646fd0) (0xc000c0d860) Create stream\nI0720 02:39:43.581852 2235 log.go:181] (0xc000646fd0) (0xc000c0d860) Stream added, broadcasting: 1\nI0720 02:39:43.586427 2235 log.go:181] (0xc000646fd0) Reply frame received for 1\nI0720 02:39:43.586475 2235 log.go:181] (0xc000646fd0) (0xc0004b3360) Create stream\nI0720 02:39:43.586486 2235 log.go:181] (0xc000646fd0) (0xc0004b3360) Stream added, broadcasting: 3\nI0720 02:39:43.587627 2235 log.go:181] (0xc000646fd0) Reply frame received for 3\nI0720 02:39:43.587658 2235 log.go:181] (0xc000646fd0) (0xc000478be0) Create stream\nI0720 02:39:43.587670 2235 log.go:181] (0xc000646fd0) (0xc000478be0) Stream added, broadcasting: 5\nI0720 02:39:43.588485 2235 log.go:181] (0xc000646fd0) Reply frame received for 5\nI0720 02:39:43.661639 2235 log.go:181] (0xc000646fd0) Data frame received for 5\nI0720 02:39:43.661672 2235 log.go:181] (0xc000478be0) (5) Data frame handling\nI0720 02:39:43.661691 2235 log.go:181] (0xc000478be0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.97.236.209:80/\nI0720 02:39:43.666995 2235 log.go:181] (0xc000646fd0) Data frame received for 3\nI0720 02:39:43.667032 2235 log.go:181] (0xc0004b3360) (3) Data frame handling\nI0720 02:39:43.667054 2235 log.go:181] (0xc0004b3360) (3) Data frame sent\nI0720 02:39:43.667426 2235 log.go:181] (0xc000646fd0) Data frame received for 5\nI0720 02:39:43.667453 2235 log.go:181] (0xc000478be0) (5) Data frame handling\nI0720 02:39:43.667495 2235 log.go:181] (0xc000646fd0) Data frame received for 3\nI0720 02:39:43.667515 2235 log.go:181] (0xc0004b3360) (3) Data frame handling\nI0720 02:39:43.669395 2235 log.go:181] (0xc000646fd0) Data frame received for 1\nI0720 02:39:43.669429 2235 log.go:181] (0xc000c0d860) (1) Data frame handling\nI0720 02:39:43.669453 2235 log.go:181] (0xc000c0d860) (1) Data frame sent\nI0720 02:39:43.669477 2235 log.go:181] (0xc000646fd0) (0xc000c0d860) Stream removed, broadcasting: 1\nI0720 02:39:43.669511 2235 log.go:181] (0xc000646fd0) Go away received\nI0720 02:39:43.669835 2235 log.go:181] (0xc000646fd0) (0xc000c0d860) Stream removed, broadcasting: 1\nI0720 02:39:43.669866 2235 log.go:181] (0xc000646fd0) (0xc0004b3360) Stream removed, broadcasting: 3\nI0720 02:39:43.669876 2235 log.go:181] (0xc000646fd0) (0xc000478be0) Stream removed, broadcasting: 5\n" Jul 20 02:39:43.675: INFO: stdout: "affinity-clusterip-timeout-lqmnr" Jul 20 02:39:43.675: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-1109, will wait for the garbage collector to delete the pods Jul 20 02:39:43.789: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 7.480199ms Jul 20 02:39:44.290: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 500.309863ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:39:53.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1109" for this suite. 
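The backend switch visible above (sixteen consecutive responses from affinity-clusterip-timeout-cpfxl, then affinity-clusterip-timeout-lqmnr after the ~15 second pause between 02:39:28 and 02:39:43) is the expected effect of ClientIP session affinity with a short timeout on the Service. A minimal Go sketch of that Service shape follows; the log never prints the actual spec, so the 10-second timeout and the selector label here are assumptions.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// affinityService builds a ClusterIP Service whose ClientIP affinity expires
// after `timeout` seconds, so repeated requests from one client stick to a
// single endpoint only within that window.
func affinityService(timeout int32) *corev1.Service {
	return &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "affinity-clusterip-timeout"},
		Spec: corev1.ServiceSpec{
			Selector:        map[string]string{"name": "affinity-clusterip-timeout"}, // assumed label
			Ports:           []corev1.ServicePort{{Port: 80}},
			SessionAffinity: corev1.ServiceAffinityClientIP,
			SessionAffinityConfig: &corev1.SessionAffinityConfig{
				ClientIP: &corev1.ClientIPConfig{TimeoutSeconds: &timeout},
			},
		},
	}
}

func main() {
	svc := affinityService(10) // hypothetical value, shorter than the 15s pause in the log
	fmt.Println(svc.Name, svc.Spec.SessionAffinity, *svc.Spec.SessionAffinityConfig.ClientIP.TimeoutSeconds)
}

With kube-proxy in iptables mode (as detected at the start of this test), the affinity window is typically implemented with the iptables "recent" match: requests arriving within TimeoutSeconds of the previous hit keep routing to the same pod, and a longer idle gap lets the next request land on any endpoint, which matches the cpfxl-to-lqmnr switch in the log.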
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:735 • [SLOW TEST:44.077 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":294,"completed":157,"skipped":2614,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:39:53.975: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... Jul 20 02:39:54.071: INFO: Created pod &Pod{ObjectMeta:{dns-7817 dns-7817 /api/v1/namespaces/dns-7817/pods/dns-7817 f33c771f-744e-424d-8436-20cb6f50d1ba 102915 0 2020-07-20 02:39:54 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2020-07-20 02:39:54 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l4m4f,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l4m4f,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l4m4f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePer
iodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 20 02:39:54.090: INFO: The status of Pod dns-7817 is Pending, waiting for it to be Running (with Ready = true) Jul 20 02:39:56.117: INFO: The status of Pod dns-7817 is Pending, waiting for it to be Running (with Ready = true) Jul 20 02:39:58.094: INFO: The status of Pod dns-7817 is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... 
Jul 20 02:39:58.094: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-7817 PodName:dns-7817 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 20 02:39:58.094: INFO: >>> kubeConfig: /root/.kube/config I0720 02:39:58.135216 8 log.go:181] (0xc0026f7c30) (0xc000673ea0) Create stream I0720 02:39:58.135256 8 log.go:181] (0xc0026f7c30) (0xc000673ea0) Stream added, broadcasting: 1 I0720 02:39:58.137416 8 log.go:181] (0xc0026f7c30) Reply frame received for 1 I0720 02:39:58.137469 8 log.go:181] (0xc0026f7c30) (0xc001f5a320) Create stream I0720 02:39:58.137485 8 log.go:181] (0xc0026f7c30) (0xc001f5a320) Stream added, broadcasting: 3 I0720 02:39:58.138478 8 log.go:181] (0xc0026f7c30) Reply frame received for 3 I0720 02:39:58.138520 8 log.go:181] (0xc0026f7c30) (0xc001f5a460) Create stream I0720 02:39:58.138534 8 log.go:181] (0xc0026f7c30) (0xc001f5a460) Stream added, broadcasting: 5 I0720 02:39:58.139384 8 log.go:181] (0xc0026f7c30) Reply frame received for 5 I0720 02:39:58.229780 8 log.go:181] (0xc0026f7c30) Data frame received for 3 I0720 02:39:58.229808 8 log.go:181] (0xc001f5a320) (3) Data frame handling I0720 02:39:58.229828 8 log.go:181] (0xc001f5a320) (3) Data frame sent I0720 02:39:58.231016 8 log.go:181] (0xc0026f7c30) Data frame received for 5 I0720 02:39:58.231077 8 log.go:181] (0xc001f5a460) (5) Data frame handling I0720 02:39:58.231111 8 log.go:181] (0xc0026f7c30) Data frame received for 3 I0720 02:39:58.231129 8 log.go:181] (0xc001f5a320) (3) Data frame handling I0720 02:39:58.232809 8 log.go:181] (0xc0026f7c30) Data frame received for 1 I0720 02:39:58.232827 8 log.go:181] (0xc000673ea0) (1) Data frame handling I0720 02:39:58.232842 8 log.go:181] (0xc000673ea0) (1) Data frame sent I0720 02:39:58.232856 8 log.go:181] (0xc0026f7c30) (0xc000673ea0) Stream removed, broadcasting: 1 I0720 02:39:58.232869 8 log.go:181] (0xc0026f7c30) Go away received I0720 02:39:58.233003 8 log.go:181] (0xc0026f7c30) (0xc000673ea0) Stream removed, broadcasting: 1 I0720 02:39:58.233025 8 log.go:181] (0xc0026f7c30) (0xc001f5a320) Stream removed, broadcasting: 3 I0720 02:39:58.233033 8 log.go:181] (0xc0026f7c30) (0xc001f5a460) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... 
Jul 20 02:39:58.233: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-7817 PodName:dns-7817 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 20 02:39:58.233: INFO: >>> kubeConfig: /root/.kube/config I0720 02:39:58.272397 8 log.go:181] (0xc0024b84d0) (0xc002793360) Create stream I0720 02:39:58.272449 8 log.go:181] (0xc0024b84d0) (0xc002793360) Stream added, broadcasting: 1 I0720 02:39:58.274366 8 log.go:181] (0xc0024b84d0) Reply frame received for 1 I0720 02:39:58.274436 8 log.go:181] (0xc0024b84d0) (0xc0027934a0) Create stream I0720 02:39:58.274454 8 log.go:181] (0xc0024b84d0) (0xc0027934a0) Stream added, broadcasting: 3 I0720 02:39:58.275387 8 log.go:181] (0xc0024b84d0) Reply frame received for 3 I0720 02:39:58.275442 8 log.go:181] (0xc0024b84d0) (0xc000673f40) Create stream I0720 02:39:58.275466 8 log.go:181] (0xc0024b84d0) (0xc000673f40) Stream added, broadcasting: 5 I0720 02:39:58.276256 8 log.go:181] (0xc0024b84d0) Reply frame received for 5 I0720 02:39:58.350645 8 log.go:181] (0xc0024b84d0) Data frame received for 3 I0720 02:39:58.350676 8 log.go:181] (0xc0027934a0) (3) Data frame handling I0720 02:39:58.350694 8 log.go:181] (0xc0027934a0) (3) Data frame sent I0720 02:39:58.351878 8 log.go:181] (0xc0024b84d0) Data frame received for 5 I0720 02:39:58.351896 8 log.go:181] (0xc000673f40) (5) Data frame handling I0720 02:39:58.352182 8 log.go:181] (0xc0024b84d0) Data frame received for 3 I0720 02:39:58.352192 8 log.go:181] (0xc0027934a0) (3) Data frame handling I0720 02:39:58.353821 8 log.go:181] (0xc0024b84d0) Data frame received for 1 I0720 02:39:58.353838 8 log.go:181] (0xc002793360) (1) Data frame handling I0720 02:39:58.353856 8 log.go:181] (0xc002793360) (1) Data frame sent I0720 02:39:58.353876 8 log.go:181] (0xc0024b84d0) (0xc002793360) Stream removed, broadcasting: 1 I0720 02:39:58.353972 8 log.go:181] (0xc0024b84d0) (0xc002793360) Stream removed, broadcasting: 1 I0720 02:39:58.353985 8 log.go:181] (0xc0024b84d0) (0xc0027934a0) Stream removed, broadcasting: 3 I0720 02:39:58.354146 8 log.go:181] (0xc0024b84d0) (0xc000673f40) Stream removed, broadcasting: 5 Jul 20 02:39:58.354: INFO: Deleting pod dns-7817... I0720 02:39:58.354193 8 log.go:181] (0xc0024b84d0) Go away received [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:39:58.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7817" for this suite. 
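
Both agnhost probes above (dns-suffix and dns-server-list) read the pod's resolver configuration from inside the container and confirm that the custom search domain and nameserver took effect. The pod spec dumped earlier pairs dnsPolicy=None with an explicit dnsConfig; a condensed Go sketch of just that part, with the image, args, nameserver, and search domain taken verbatim from the dump:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    // dnsPolicy: None tells the kubelet to build /etc/resolv.conf purely
    // from dnsConfig below, ignoring the cluster DNS defaults.
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "dns-7817"},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:  "agnhost",
                Image: "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20",
                Args:  []string{"pause"},
            }},
            DNSPolicy: corev1.DNSNone,
            DNSConfig: &corev1.PodDNSConfig{
                Nameservers: []string{"1.1.1.1"},
                Searches:    []string{"resolv.conf.local"},
            },
        },
    }
    fmt.Printf("%s nameservers=%v\n", pod.Name, pod.Spec.DNSConfig.Nameservers)
}
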
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":294,"completed":158,"skipped":2626,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:39:58.408: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 20 02:39:59.399: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jul 20 02:40:01.410: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730809599, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730809599, loc:(*time.Location)(0x7deddc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730809599, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730809599, loc:(*time.Location)(0x7deddc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 20 02:40:04.456: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 20 02:40:04.464: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-3983-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:40:05.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7491" for this suite. STEP: Destroying namespace "webhook-7491-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.350 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":294,"completed":159,"skipped":2628,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:40:05.758: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jul 20 02:40:06.104: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0dbe1395-3b13-4c66-a853-95e42419b79a" in namespace "projected-1982" to be "Succeeded or Failed" Jul 20 02:40:06.192: INFO: Pod "downwardapi-volume-0dbe1395-3b13-4c66-a853-95e42419b79a": Phase="Pending", Reason="", readiness=false. Elapsed: 87.810698ms Jul 20 02:40:08.196: INFO: Pod "downwardapi-volume-0dbe1395-3b13-4c66-a853-95e42419b79a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.091980451s Jul 20 02:40:10.200: INFO: Pod "downwardapi-volume-0dbe1395-3b13-4c66-a853-95e42419b79a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.09635516s STEP: Saw pod success Jul 20 02:40:10.200: INFO: Pod "downwardapi-volume-0dbe1395-3b13-4c66-a853-95e42419b79a" satisfied condition "Succeeded or Failed" Jul 20 02:40:10.203: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-0dbe1395-3b13-4c66-a853-95e42419b79a container client-container: STEP: delete the pod Jul 20 02:40:10.243: INFO: Waiting for pod downwardapi-volume-0dbe1395-3b13-4c66-a853-95e42419b79a to disappear Jul 20 02:40:10.247: INFO: Pod downwardapi-volume-0dbe1395-3b13-4c66-a853-95e42419b79a no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:40:10.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1982" for this suite. 
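
The "downward API volume plugin" under test here is a projected volume whose file reports the container's CPU limit; since the test container sets no limit, the kubelet substitutes the node's allocatable CPU, which is what the pod's output gets checked against. A sketch of the volume definition under those assumptions (the volume name and file path are illustrative; "client-container" is the container name from the log above):

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    vol := corev1.Volume{
        Name: "podinfo", // illustrative name
        VolumeSource: corev1.VolumeSource{
            Projected: &corev1.ProjectedVolumeSource{
                Sources: []corev1.VolumeProjection{{
                    DownwardAPI: &corev1.DownwardAPIProjection{
                        Items: []corev1.DownwardAPIVolumeFile{{
                            Path: "cpu_limit", // illustrative file name
                            ResourceFieldRef: &corev1.ResourceFieldSelector{
                                ContainerName: "client-container",
                                // With no limit on the container, this
                                // resolves to node allocatable CPU.
                                Resource: "limits.cpu",
                            },
                        }},
                    },
                }},
            },
        },
    }
    fmt.Println(vol.Name)
}
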
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":294,"completed":160,"skipped":2651,"failed":0} SSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:40:10.255: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod Jul 20 02:40:10.441: INFO: PodSpec: initContainers in spec.initContainers Jul 20 02:41:03.453: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-efee79d1-92e3-4b18-95cb-ca75013c7545", GenerateName:"", Namespace:"init-container-7924", SelfLink:"/api/v1/namespaces/init-container-7924/pods/pod-init-efee79d1-92e3-4b18-95cb-ca75013c7545", UID:"0dfedffe-b992-4562-881c-9792d2bf32fb", ResourceVersion:"103296", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63730809610, loc:(*time.Location)(0x7deddc0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"441021324"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0024ff800), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0024ff820)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0024ff860), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0024ff8a0)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-w5h88", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0030d19c0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), 
AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-w5h88", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-w5h88", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-w5h88", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc004a0c138), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", 
NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"latest-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002458b60), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc004a0c1c0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc004a0c1e0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc004a0c1e8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc004a0c1ec), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc0057857b0), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730809610, loc:(*time.Location)(0x7deddc0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730809610, loc:(*time.Location)(0x7deddc0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730809610, loc:(*time.Location)(0x7deddc0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730809610, loc:(*time.Location)(0x7deddc0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.12", PodIP:"10.244.2.13", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.13"}}, StartTime:(*v1.Time)(0xc0024ff8e0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc0024ff920), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002458c40)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://e32e76d7c1874fd589973efde644edb5b8c02834f77fcccb16a2b33c7c81c697", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0024ff940), Running:(*v1.ContainerStateRunning)(nil), 
Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0024ff900), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc004a0c26f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:41:03.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7924" for this suite. • [SLOW TEST:53.270 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":294,"completed":161,"skipped":2654,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:41:03.526: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-secret-b545 STEP: Creating a pod to test atomic-volume-subpath Jul 20 02:41:03.646: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-b545" in namespace "subpath-6863" to be "Succeeded or Failed" Jul 20 02:41:03.660: INFO: Pod "pod-subpath-test-secret-b545": Phase="Pending", Reason="", readiness=false. Elapsed: 14.685102ms Jul 20 02:41:05.723: INFO: Pod "pod-subpath-test-secret-b545": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07690578s Jul 20 02:41:07.727: INFO: Pod "pod-subpath-test-secret-b545": Phase="Running", Reason="", readiness=true. Elapsed: 4.081302586s Jul 20 02:41:09.731: INFO: Pod "pod-subpath-test-secret-b545": Phase="Running", Reason="", readiness=true. 
Elapsed: 6.08551328s Jul 20 02:41:11.736: INFO: Pod "pod-subpath-test-secret-b545": Phase="Running", Reason="", readiness=true. Elapsed: 8.089873203s Jul 20 02:41:13.740: INFO: Pod "pod-subpath-test-secret-b545": Phase="Running", Reason="", readiness=true. Elapsed: 10.094130956s Jul 20 02:41:15.743: INFO: Pod "pod-subpath-test-secret-b545": Phase="Running", Reason="", readiness=true. Elapsed: 12.097561191s Jul 20 02:41:17.747: INFO: Pod "pod-subpath-test-secret-b545": Phase="Running", Reason="", readiness=true. Elapsed: 14.101662133s Jul 20 02:41:19.752: INFO: Pod "pod-subpath-test-secret-b545": Phase="Running", Reason="", readiness=true. Elapsed: 16.10574967s Jul 20 02:41:21.755: INFO: Pod "pod-subpath-test-secret-b545": Phase="Running", Reason="", readiness=true. Elapsed: 18.10952266s Jul 20 02:41:23.760: INFO: Pod "pod-subpath-test-secret-b545": Phase="Running", Reason="", readiness=true. Elapsed: 20.113806199s Jul 20 02:41:25.764: INFO: Pod "pod-subpath-test-secret-b545": Phase="Running", Reason="", readiness=true. Elapsed: 22.118356901s Jul 20 02:41:27.769: INFO: Pod "pod-subpath-test-secret-b545": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.123017132s STEP: Saw pod success Jul 20 02:41:27.769: INFO: Pod "pod-subpath-test-secret-b545" satisfied condition "Succeeded or Failed" Jul 20 02:41:27.772: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-secret-b545 container test-container-subpath-secret-b545: STEP: delete the pod Jul 20 02:41:27.886: INFO: Waiting for pod pod-subpath-test-secret-b545 to disappear Jul 20 02:41:27.893: INFO: Pod pod-subpath-test-secret-b545 no longer exists STEP: Deleting pod pod-subpath-test-secret-b545 Jul 20 02:41:27.893: INFO: Deleting pod "pod-subpath-test-secret-b545" in namespace "subpath-6863" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:41:27.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-6863" for this suite. 
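
Secret, configmap, downward API, and projected volumes are "atomic writer" volumes: their contents are published through a symlinked, timestamped dot-directory so that updates appear atomically, and this test keeps a container reading one key through a subPath mount for the whole 24-second window above. A minimal sketch of the volume/mount pairing, with the secret name and key invented for illustration:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    vol := corev1.Volume{
        Name: "test-volume",
        VolumeSource: corev1.VolumeSource{
            Secret: &corev1.SecretVolumeSource{SecretName: "my-secret"}, // hypothetical secret
        },
    }
    mount := corev1.VolumeMount{
        Name:      "test-volume",
        MountPath: "/test-volume",
        // subPath selects a single entry of the atomic-writer volume
        // instead of mounting the whole directory.
        SubPath: "secret-key", // hypothetical key inside the secret
    }
    fmt.Println(vol.Name, mount.SubPath)
}
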
• [SLOW TEST:24.376 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":294,"completed":162,"skipped":2682,"failed":0} SS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:41:27.903: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Jul 20 02:41:34.487: INFO: Successfully updated pod "adopt-release-ls56m" STEP: Checking that the Job readopts the Pod Jul 20 02:41:34.487: INFO: Waiting up to 15m0s for pod "adopt-release-ls56m" in namespace "job-706" to be "adopted" Jul 20 02:41:34.511: INFO: Pod "adopt-release-ls56m": Phase="Running", Reason="", readiness=true. Elapsed: 23.453376ms Jul 20 02:41:36.515: INFO: Pod "adopt-release-ls56m": Phase="Running", Reason="", readiness=true. Elapsed: 2.027737287s Jul 20 02:41:36.515: INFO: Pod "adopt-release-ls56m" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Jul 20 02:41:37.026: INFO: Successfully updated pod "adopt-release-ls56m" STEP: Checking that the Job releases the Pod Jul 20 02:41:37.027: INFO: Waiting up to 15m0s for pod "adopt-release-ls56m" in namespace "job-706" to be "released" Jul 20 02:41:37.047: INFO: Pod "adopt-release-ls56m": Phase="Running", Reason="", readiness=true. Elapsed: 20.877081ms Jul 20 02:41:39.051: INFO: Pod "adopt-release-ls56m": Phase="Running", Reason="", readiness=true. Elapsed: 2.024786611s Jul 20 02:41:39.051: INFO: Pod "adopt-release-ls56m" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:41:39.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-706" for this suite. 
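
The "Orphaning" step above works by stripping the pod's controller ownerReference; because the pod's labels still match the Job's selector, the Job controller adopts it back, and conversely removing the matching labels makes the controller release it again. A rough client-go sketch of the orphaning half, reusing the names from this run; error handling and the wait loops are elided, and the flow, not the framework's literal calls, is the point:

package main

import (
    "context"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    client := kubernetes.NewForConfigOrDie(config)
    ctx := context.Background()

    pod, err := client.CoreV1().Pods("job-706").Get(ctx, "adopt-release-ls56m", metav1.GetOptions{})
    if err != nil {
        panic(err)
    }
    // Orphan: drop the controller reference. The Job controller re-adopts
    // the pod because its labels still match the Job's selector.
    pod.OwnerReferences = nil
    if _, err := client.CoreV1().Pods("job-706").Update(ctx, pod, metav1.UpdateOptions{}); err != nil {
        panic(err)
    }
    // Release: removing the matching labels instead would make the
    // controller drop its ownerReference on the pod ("released").
}
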
• [SLOW TEST:11.157 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":294,"completed":163,"skipped":2684,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:41:39.060: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 20 02:41:39.360: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-9c8dc317-16de-4f84-ba2e-19ce3c4c2454" in namespace "security-context-test-737" to be "Succeeded or Failed" Jul 20 02:41:39.367: INFO: Pod "busybox-readonly-false-9c8dc317-16de-4f84-ba2e-19ce3c4c2454": Phase="Pending", Reason="", readiness=false. Elapsed: 6.804309ms Jul 20 02:41:41.370: INFO: Pod "busybox-readonly-false-9c8dc317-16de-4f84-ba2e-19ce3c4c2454": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010164761s Jul 20 02:41:43.381: INFO: Pod "busybox-readonly-false-9c8dc317-16de-4f84-ba2e-19ce3c4c2454": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021369162s Jul 20 02:41:43.381: INFO: Pod "busybox-readonly-false-9c8dc317-16de-4f84-ba2e-19ce3c4c2454" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:41:43.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-737" for this suite. 
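
The pod reaching Succeeded is the whole assertion here: with readOnlyRootFilesystem=false, the container's write to its root filesystem succeeds and the command exits 0. A small sketch of the relevant container stanza (the written path and command are illustrative; the busybox image is the one used elsewhere in this run):

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    readOnly := false
    c := corev1.Container{
        Name:  "busybox-readonly-false",
        Image: "docker.io/library/busybox:1.29",
        // Illustrative write to the root filesystem; it must succeed
        // when the rootfs is writable.
        Command: []string{"sh", "-c", "echo checking > /file_to_write"},
        SecurityContext: &corev1.SecurityContext{
            ReadOnlyRootFilesystem: &readOnly,
        },
    }
    fmt.Println(c.Name, *c.SecurityContext.ReadOnlyRootFilesystem)
}
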
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":294,"completed":164,"skipped":2693,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:41:43.389: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on node default medium Jul 20 02:41:43.459: INFO: Waiting up to 5m0s for pod "pod-8e17ac3c-9611-47ed-bc46-7e925f6a3956" in namespace "emptydir-6879" to be "Succeeded or Failed" Jul 20 02:41:43.472: INFO: Pod "pod-8e17ac3c-9611-47ed-bc46-7e925f6a3956": Phase="Pending", Reason="", readiness=false. Elapsed: 12.841908ms Jul 20 02:41:45.475: INFO: Pod "pod-8e17ac3c-9611-47ed-bc46-7e925f6a3956": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016145029s Jul 20 02:41:47.484: INFO: Pod "pod-8e17ac3c-9611-47ed-bc46-7e925f6a3956": Phase="Running", Reason="", readiness=true. Elapsed: 4.024909724s Jul 20 02:41:49.487: INFO: Pod "pod-8e17ac3c-9611-47ed-bc46-7e925f6a3956": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.028186175s STEP: Saw pod success Jul 20 02:41:49.487: INFO: Pod "pod-8e17ac3c-9611-47ed-bc46-7e925f6a3956" satisfied condition "Succeeded or Failed" Jul 20 02:41:49.489: INFO: Trying to get logs from node latest-worker2 pod pod-8e17ac3c-9611-47ed-bc46-7e925f6a3956 container test-container: STEP: delete the pod Jul 20 02:41:49.524: INFO: Waiting for pod pod-8e17ac3c-9611-47ed-bc46-7e925f6a3956 to disappear Jul 20 02:41:49.555: INFO: Pod pod-8e17ac3c-9611-47ed-bc46-7e925f6a3956 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:41:49.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6879" for this suite. 
• [SLOW TEST:6.176 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":165,"skipped":2711,"failed":0} [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:41:49.565: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Jul 20 02:41:49.628: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jul 20 02:41:49.643: INFO: Waiting for terminating namespaces to be deleted... Jul 20 02:41:49.645: INFO: Logging pods the apiserver thinks is on node latest-worker before test Jul 20 02:41:49.649: INFO: coredns-f9fd979d6-s745j from kube-system started at 2020-07-19 21:39:25 +0000 UTC (1 container statuses recorded) Jul 20 02:41:49.649: INFO: Container coredns ready: true, restart count 0 Jul 20 02:41:49.649: INFO: coredns-f9fd979d6-zs4sj from kube-system started at 2020-07-19 21:39:36 +0000 UTC (1 container statuses recorded) Jul 20 02:41:49.649: INFO: Container coredns ready: true, restart count 0 Jul 20 02:41:49.649: INFO: kindnet-46dnt from kube-system started at 2020-07-19 21:38:46 +0000 UTC (1 container statuses recorded) Jul 20 02:41:49.649: INFO: Container kindnet-cni ready: true, restart count 0 Jul 20 02:41:49.649: INFO: kube-proxy-sxpg9 from kube-system started at 2020-07-19 21:38:45 +0000 UTC (1 container statuses recorded) Jul 20 02:41:49.649: INFO: Container kube-proxy ready: true, restart count 0 Jul 20 02:41:49.649: INFO: local-path-provisioner-8b46957d4-2gzpd from local-path-storage started at 2020-07-19 21:39:25 +0000 UTC (1 container statuses recorded) Jul 20 02:41:49.649: INFO: Container local-path-provisioner ready: true, restart count 0 Jul 20 02:41:49.649: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test Jul 20 02:41:49.653: INFO: adopt-release-chvdm from job-706 started at 2020-07-20 02:41:37 +0000 UTC (1 container statuses recorded) Jul 20 02:41:49.653: INFO: Container c ready: true, restart count 0 Jul 20 02:41:49.653: INFO: adopt-release-ls56m from job-706 started at 2020-07-20 02:41:28 +0000 UTC (1 container statuses recorded) Jul 20 02:41:49.653: INFO: Container c ready: true, restart count 0 Jul 20 02:41:49.653: INFO: adopt-release-r2shq from job-706 started at 2020-07-20 02:41:28 +0000 UTC (1 container statuses recorded) Jul 20 02:41:49.653: INFO: Container c ready: true, restart count 0 Jul 20 02:41:49.653: INFO: kindnet-g6zbt from kube-system started at 2020-07-19 21:38:46 +0000 UTC (1 container statuses 
recorded) Jul 20 02:41:49.653: INFO: Container kindnet-cni ready: true, restart count 0 Jul 20 02:41:49.653: INFO: kube-proxy-nsnzn from kube-system started at 2020-07-19 21:38:45 +0000 UTC (1 container statuses recorded) Jul 20 02:41:49.653: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-bc1482d9-1edf-4a0c-be65-646c1034ce95 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-bc1482d9-1edf-4a0c-be65-646c1034ce95 off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-bc1482d9-1edf-4a0c-be65-646c1034ce95 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:46:57.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5551" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:308.298 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":294,"completed":166,"skipped":2711,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:46:57.864: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on tmpfs Jul 20 02:46:58.001: INFO: Waiting up to 5m0s for pod "pod-b8bcee97-ecc2-435a-9e96-06782f3c8ad7" in namespace "emptydir-2301" to be "Succeeded or Failed" Jul 20 02:46:58.046: INFO: Pod "pod-b8bcee97-ecc2-435a-9e96-06782f3c8ad7": 
Phase="Pending", Reason="", readiness=false. Elapsed: 44.500139ms Jul 20 02:47:00.050: INFO: Pod "pod-b8bcee97-ecc2-435a-9e96-06782f3c8ad7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048923565s Jul 20 02:47:02.072: INFO: Pod "pod-b8bcee97-ecc2-435a-9e96-06782f3c8ad7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.070646495s STEP: Saw pod success Jul 20 02:47:02.072: INFO: Pod "pod-b8bcee97-ecc2-435a-9e96-06782f3c8ad7" satisfied condition "Succeeded or Failed" Jul 20 02:47:02.085: INFO: Trying to get logs from node latest-worker2 pod pod-b8bcee97-ecc2-435a-9e96-06782f3c8ad7 container test-container: STEP: delete the pod Jul 20 02:47:02.423: INFO: Waiting for pod pod-b8bcee97-ecc2-435a-9e96-06782f3c8ad7 to disappear Jul 20 02:47:02.426: INFO: Pod pod-b8bcee97-ecc2-435a-9e96-06782f3c8ad7 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:47:02.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2301" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":167,"skipped":2772,"failed":0} ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:47:02.434: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-8340, will wait for the garbage collector to delete the pods Jul 20 02:47:08.565: INFO: Deleting Job.batch foo took: 6.557146ms Jul 20 02:47:08.965: INFO: Terminating Job.batch foo pods took: 400.322945ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:47:42.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-8340" for this suite. 
• [SLOW TEST:39.741 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":294,"completed":168,"skipped":2772,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-network] IngressClass API should support creating IngressClass API operations [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] IngressClass API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:47:42.175: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ingressclass STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] IngressClass API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/ingressclass.go:148 [It] should support creating IngressClass API operations [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting /apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.iov1 STEP: creating STEP: getting STEP: listing STEP: watching Jul 20 02:47:42.333: INFO: starting watch STEP: patching STEP: updating Jul 20 02:47:42.417: INFO: waiting for watch events with expected annotations Jul 20 02:47:42.417: INFO: saw patched and updated annotations STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] IngressClass API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:47:42.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ingressclass-2498" for this suite. •{"msg":"PASSED [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]","total":294,"completed":169,"skipped":2786,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:47:42.462: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jul 20 02:47:42.511: INFO: Waiting up to 5m0s for pod "downwardapi-volume-011e782a-ae47-43d9-a9e4-73043f0b39e6" in namespace "projected-8935" to be "Succeeded or Failed" Jul 20 02:47:42.548: INFO: Pod "downwardapi-volume-011e782a-ae47-43d9-a9e4-73043f0b39e6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 36.958617ms Jul 20 02:47:44.553: INFO: Pod "downwardapi-volume-011e782a-ae47-43d9-a9e4-73043f0b39e6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041182385s Jul 20 02:47:46.557: INFO: Pod "downwardapi-volume-011e782a-ae47-43d9-a9e4-73043f0b39e6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.045353829s STEP: Saw pod success Jul 20 02:47:46.557: INFO: Pod "downwardapi-volume-011e782a-ae47-43d9-a9e4-73043f0b39e6" satisfied condition "Succeeded or Failed" Jul 20 02:47:46.560: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-011e782a-ae47-43d9-a9e4-73043f0b39e6 container client-container: STEP: delete the pod Jul 20 02:47:46.597: INFO: Waiting for pod downwardapi-volume-011e782a-ae47-43d9-a9e4-73043f0b39e6 to disappear Jul 20 02:47:46.705: INFO: Pod downwardapi-volume-011e782a-ae47-43d9-a9e4-73043f0b39e6 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:47:46.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8935" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":294,"completed":170,"skipped":2798,"failed":0} ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:47:46.746: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-upd-701cd5c1-31d5-45d5-8549-0b86dff9a224 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:47:53.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8094" for this suite. 
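
A ConfigMap carries UTF-8 strings under .data and raw bytes under .binaryData, and a volume mount materializes both as files, which is exactly what the two waits above ("pod with text data", "pod with binary data") verify. A sketch with illustrative keys and bytes (the ConfigMap name is the one from this run):

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    cm := corev1.ConfigMap{
        ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-upd-701cd5c1-31d5-45d5-8549-0b86dff9a224"},
        // Text entries must be valid UTF-8...
        Data: map[string]string{"data-1": "value-1"}, // illustrative key/value
        // ...while binaryData accepts arbitrary bytes.
        BinaryData: map[string][]byte{"dump.bin": {0xde, 0xad, 0xbe, 0xef}}, // illustrative bytes
    }
    fmt.Println(cm.Name, len(cm.BinaryData["dump.bin"]))
}
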
• [SLOW TEST:6.325 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":294,"completed":171,"skipped":2798,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:47:53.072: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-downwardapi-hhxg STEP: Creating a pod to test atomic-volume-subpath Jul 20 02:47:53.497: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-hhxg" in namespace "subpath-5143" to be "Succeeded or Failed" Jul 20 02:47:53.597: INFO: Pod "pod-subpath-test-downwardapi-hhxg": Phase="Pending", Reason="", readiness=false. Elapsed: 100.492197ms Jul 20 02:47:55.601: INFO: Pod "pod-subpath-test-downwardapi-hhxg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.104594665s Jul 20 02:47:57.607: INFO: Pod "pod-subpath-test-downwardapi-hhxg": Phase="Running", Reason="", readiness=true. Elapsed: 4.109820287s Jul 20 02:47:59.611: INFO: Pod "pod-subpath-test-downwardapi-hhxg": Phase="Running", Reason="", readiness=true. Elapsed: 6.114453641s Jul 20 02:48:01.616: INFO: Pod "pod-subpath-test-downwardapi-hhxg": Phase="Running", Reason="", readiness=true. Elapsed: 8.118766358s Jul 20 02:48:03.619: INFO: Pod "pod-subpath-test-downwardapi-hhxg": Phase="Running", Reason="", readiness=true. Elapsed: 10.122346318s Jul 20 02:48:05.623: INFO: Pod "pod-subpath-test-downwardapi-hhxg": Phase="Running", Reason="", readiness=true. Elapsed: 12.125740694s Jul 20 02:48:07.626: INFO: Pod "pod-subpath-test-downwardapi-hhxg": Phase="Running", Reason="", readiness=true. Elapsed: 14.129681908s Jul 20 02:48:09.631: INFO: Pod "pod-subpath-test-downwardapi-hhxg": Phase="Running", Reason="", readiness=true. Elapsed: 16.134410523s Jul 20 02:48:11.635: INFO: Pod "pod-subpath-test-downwardapi-hhxg": Phase="Running", Reason="", readiness=true. Elapsed: 18.138350701s Jul 20 02:48:13.639: INFO: Pod "pod-subpath-test-downwardapi-hhxg": Phase="Running", Reason="", readiness=true. Elapsed: 20.141942375s Jul 20 02:48:15.643: INFO: Pod "pod-subpath-test-downwardapi-hhxg": Phase="Running", Reason="", readiness=true. Elapsed: 22.146296771s Jul 20 02:48:17.648: INFO: Pod "pod-subpath-test-downwardapi-hhxg": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.150966124s STEP: Saw pod success Jul 20 02:48:17.648: INFO: Pod "pod-subpath-test-downwardapi-hhxg" satisfied condition "Succeeded or Failed" Jul 20 02:48:17.651: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-downwardapi-hhxg container test-container-subpath-downwardapi-hhxg: STEP: delete the pod Jul 20 02:48:17.669: INFO: Waiting for pod pod-subpath-test-downwardapi-hhxg to disappear Jul 20 02:48:17.673: INFO: Pod pod-subpath-test-downwardapi-hhxg no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-hhxg Jul 20 02:48:17.673: INFO: Deleting pod "pod-subpath-test-downwardapi-hhxg" in namespace "subpath-5143" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:48:17.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5143" for this suite. • [SLOW TEST:24.676 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":294,"completed":172,"skipped":2822,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:48:17.748: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:48:24.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-6857" for this suite. STEP: Destroying namespace "nsdeletetest-3236" for this suite. Jul 20 02:48:24.019: INFO: Namespace nsdeletetest-3236 was already deleted STEP: Destroying namespace "nsdeletetest-3595" for this suite. 
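The namespace-deletion spec above verifies namespace-scoped garbage collection: a Service created inside a namespace disappears when the namespace is deleted, and a recreated namespace with the same name starts empty. A minimal reproduction (invented names):

apiVersion: v1
kind: Namespace
metadata:
  name: nsdeletetest-demo
---
apiVersion: v1
kind: Service
metadata:
  name: test-service
  namespace: nsdeletetest-demo
spec:
  selector:
    app: demo
  ports:
  - port: 80
    targetPort: 8080

After kubectl delete namespace nsdeletetest-demo completes, recreating the namespace and running kubectl get services -n nsdeletetest-demo should return nothing, which is the "Verifying there is no service in the namespace" step above.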
• [SLOW TEST:6.274 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":294,"completed":173,"skipped":2838,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:48:24.023: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a secret STEP: listing secrets in all namespaces to ensure that there are more than zero STEP: patching the secret STEP: deleting the secret using a LabelSelector STEP: listing secrets in all namespaces, searching for label name and value in patch [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:48:24.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-115" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":294,"completed":174,"skipped":2865,"failed":0} SSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:48:24.411: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-configmap-t4n4 STEP: Creating a pod to test atomic-volume-subpath Jul 20 02:48:24.585: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-t4n4" in namespace "subpath-722" to be "Succeeded or Failed" Jul 20 02:48:24.615: INFO: Pod "pod-subpath-test-configmap-t4n4": Phase="Pending", Reason="", readiness=false. Elapsed: 30.038441ms Jul 20 02:48:26.619: INFO: Pod "pod-subpath-test-configmap-t4n4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034303154s Jul 20 02:48:28.624: INFO: Pod "pod-subpath-test-configmap-t4n4": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.038811522s Jul 20 02:48:30.628: INFO: Pod "pod-subpath-test-configmap-t4n4": Phase="Running", Reason="", readiness=true. Elapsed: 6.043096574s Jul 20 02:48:32.632: INFO: Pod "pod-subpath-test-configmap-t4n4": Phase="Running", Reason="", readiness=true. Elapsed: 8.046637905s Jul 20 02:48:34.636: INFO: Pod "pod-subpath-test-configmap-t4n4": Phase="Running", Reason="", readiness=true. Elapsed: 10.050966685s Jul 20 02:48:36.645: INFO: Pod "pod-subpath-test-configmap-t4n4": Phase="Running", Reason="", readiness=true. Elapsed: 12.059736501s Jul 20 02:48:38.649: INFO: Pod "pod-subpath-test-configmap-t4n4": Phase="Running", Reason="", readiness=true. Elapsed: 14.064235747s Jul 20 02:48:40.654: INFO: Pod "pod-subpath-test-configmap-t4n4": Phase="Running", Reason="", readiness=true. Elapsed: 16.068705446s Jul 20 02:48:42.658: INFO: Pod "pod-subpath-test-configmap-t4n4": Phase="Running", Reason="", readiness=true. Elapsed: 18.073244589s Jul 20 02:48:44.663: INFO: Pod "pod-subpath-test-configmap-t4n4": Phase="Running", Reason="", readiness=true. Elapsed: 20.077462035s Jul 20 02:48:46.682: INFO: Pod "pod-subpath-test-configmap-t4n4": Phase="Running", Reason="", readiness=true. Elapsed: 22.096467264s Jul 20 02:48:48.694: INFO: Pod "pod-subpath-test-configmap-t4n4": Phase="Running", Reason="", readiness=true. Elapsed: 24.108612524s Jul 20 02:48:50.698: INFO: Pod "pod-subpath-test-configmap-t4n4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.113262078s STEP: Saw pod success Jul 20 02:48:50.698: INFO: Pod "pod-subpath-test-configmap-t4n4" satisfied condition "Succeeded or Failed" Jul 20 02:48:50.702: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-configmap-t4n4 container test-container-subpath-configmap-t4n4: STEP: delete the pod Jul 20 02:48:50.740: INFO: Waiting for pod pod-subpath-test-configmap-t4n4 to disappear Jul 20 02:48:50.855: INFO: Pod pod-subpath-test-configmap-t4n4 no longer exists STEP: Deleting pod pod-subpath-test-configmap-t4n4 Jul 20 02:48:50.855: INFO: Deleting pod "pod-subpath-test-configmap-t4n4" in namespace "subpath-722" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:48:50.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-722" for this suite. 
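The Secrets spec above ("should patch a secret") runs through a typical label-driven lifecycle; roughly (names and labels invented):

apiVersion: v1
kind: Secret
metadata:
  name: demo-secret
  labels:
    testsecret-constant: "true"
type: Opaque
stringData:
  key: value   # stringData is base64-encoded into data by the API server

The patch step corresponds to something like kubectl patch secret demo-secret -p '{"metadata":{"labels":{"patched":"true"}}}', and the LabelSelector-based deletion to kubectl delete secrets -l patched=true.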
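The atomic-writer subpath pod above mounts a single key of a ConfigMap at a file path via subPath rather than mounting the whole volume; a sketch of the shape (names invented):

apiVersion: v1
kind: ConfigMap
metadata:
  name: subpath-demo-cm
data:
  file.txt: "subpath contents"
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: busybox
    # Read the single projected file, then exit so the pod reaches Succeeded
    command: ["sh", "-c", "cat /test/file.txt"]
    volumeMounts:
    - name: cm
      mountPath: /test/file.txt
      subPath: file.txt   # mount only this key, not the whole ConfigMap
  volumes:
  - name: cm
    configMap:
      name: subpath-demo-cm

The long Running phase in the log comes from the real test container repeatedly reading the file for a while before exiting, hence the roughly 24-26 second SLOW TEST timings.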
• [SLOW TEST:26.456 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":294,"completed":175,"skipped":2870,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:48:50.868: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 20 02:48:51.204: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Jul 20 02:48:54.218: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7647 create -f -' Jul 20 02:48:58.616: INFO: stderr: "" Jul 20 02:48:58.616: INFO: stdout: "e2e-test-crd-publish-openapi-3824-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Jul 20 02:48:58.616: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7647 delete e2e-test-crd-publish-openapi-3824-crds test-foo' Jul 20 02:48:58.727: INFO: stderr: "" Jul 20 02:48:58.727: INFO: stdout: "e2e-test-crd-publish-openapi-3824-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Jul 20 02:48:58.727: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7647 apply -f -' Jul 20 02:48:59.057: INFO: stderr: "" Jul 20 02:48:59.057: INFO: stdout: "e2e-test-crd-publish-openapi-3824-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Jul 20 02:48:59.057: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7647 delete e2e-test-crd-publish-openapi-3824-crds test-foo' Jul 20 02:48:59.163: INFO: stderr: "" Jul 20 02:48:59.163: INFO: stdout: "e2e-test-crd-publish-openapi-3824-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Jul 20 02:48:59.163: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7647 create -f -' Jul 20 02:48:59.435: INFO: rc: 1 Jul 20 
02:48:59.435: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7647 apply -f -' Jul 20 02:48:59.667: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Jul 20 02:48:59.667: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7647 create -f -' Jul 20 02:48:59.957: INFO: rc: 1 Jul 20 02:48:59.958: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7647 apply -f -' Jul 20 02:49:00.237: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Jul 20 02:49:00.237: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3824-crds' Jul 20 02:49:00.528: INFO: stderr: "" Jul 20 02:49:00.528: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3824-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Jul 20 02:49:00.529: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3824-crds.metadata' Jul 20 02:49:00.879: INFO: stderr: "" Jul 20 02:49:00.879: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3824-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. 
It is\n represented in RFC3339 form and is in UTC.\n\n Populated by the system. Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested.\n\n Populated by the system when a graceful deletion is requested. Read-only.\n More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. 
The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server.\n\n If this field is specified and the generated name exists, the server will\n NOT return a 409 - instead, it will either return 201 Created or 500 with\n Reason ServerTimeout indicating a unique name could not be found in the\n time allotted, and the client should retry (optionally after the time\n indicated in the Retry-After header).\n\n Applied only if Name is not specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within which each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty.\n\n Must be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources.\n\n Populated by the system. Read-only. Value must be treated as opaque by\n clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. 
Populated by the system.\n Read-only.\n\n DEPRECATED Kubernetes will stop propagating this field in 1.20 release and\n the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations.\n\n Populated by the system. Read-only. More info:\n http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Jul 20 02:49:00.879: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3824-crds.spec' Jul 20 02:49:01.145: INFO: stderr: "" Jul 20 02:49:01.145: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3824-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Jul 20 02:49:01.145: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3824-crds.spec.bars' Jul 20 02:49:01.411: INFO: stderr: "" Jul 20 02:49:01.411: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3824-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Jul 20 02:49:01.411: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3824-crds.spec.bars2' Jul 20 02:49:01.733: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:49:04.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7647" for this suite. • [SLOW TEST:13.811 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":294,"completed":176,"skipped":2931,"failed":0} SSSSSSS ------------------------------ [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:49:04.679: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename limitrange STEP: Waiting for a default service account to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a LimitRange STEP: Setting up watch STEP: Submitting a LimitRange Jul 20 02:49:04.755: INFO: observed the limitRanges list STEP: Verifying LimitRange creation was observed STEP: Fetching the LimitRange to ensure it has proper values Jul 20 02:49:04.764: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Jul 20 02:49:04.764: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with no resource requirements STEP: Ensuring Pod has resource requirements applied from LimitRange Jul 20 02:49:04.770: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Jul 20 02:49:04.770: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with partial resource requirements STEP: Ensuring Pod has merged resource requirements applied from LimitRange Jul 20 02:49:04.985: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] Jul 20 02:49:04.986: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Failing to create a Pod with less than min resources STEP: Failing to create a Pod with more than max resources STEP: Updating a LimitRange STEP: Verifying LimitRange updating is effective STEP: Creating a Pod with less than former min resources STEP: Failing to create a Pod with more than max resources STEP: Deleting a LimitRange STEP: Verifying the LimitRange was deleted Jul 20 02:49:12.297: INFO: limitRange is already deleted STEP: Creating a Pod with more than former max resources [AfterEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:49:12.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "limitrange-206" for this suite. 
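The CustomResourcePublishOpenAPI spec further up relies on a CRD whose versions carry a structural openAPIV3Schema; that schema is what makes the apiserver publish the type so kubectl explain and client-side validation work. Reconstructed loosely from the explain output in the log (the group and names are placeholders; the real test generates random ones):

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        description: Foo CRD for Testing
        properties:
          spec:
            type: object
            description: Specification of Foo
            properties:
              bars:
                type: array
                description: List of Bars and their specs.
                items:
                  type: object
                  required:
                  - name
                  properties:
                    name:
                      type: string
                      description: Name of Bar.
                    age:
                      type: string
                      description: Age of Bar.
                    bazs:
                      type: array
                      description: List of Bazs.
                      items:
                        type: string
          status:
            type: object
            description: Status of Foo

With such a schema in place, a manifest with an unknown field, or a bar missing its required name, is rejected client-side by kubectl create/apply, which is what the rc: 1 results above record.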
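The defaults the LimitRange spec just verified (requests of 100m CPU, 209715200 bytes = 200Mi memory, 214748364800 bytes = 200Gi ephemeral-storage; limits of 500m / 500Mi / 500Gi) correspond to a LimitRange of roughly this shape. The min/max bounds here are illustrative only; the excerpt does not show the exact values the test uses:

apiVersion: v1
kind: LimitRange
metadata:
  name: limitrange-demo
spec:
  limits:
  - type: Container
    defaultRequest:        # filled into resources.requests of containers that omit them
      cpu: 100m
      memory: 200Mi
      ephemeral-storage: 200Gi
    default:               # filled into resources.limits of containers that omit them
      cpu: 500m
      memory: 500Mi
      ephemeral-storage: 500Gi
    min:                   # illustrative bounds, not taken from the log
      cpu: 50m
    max:
      cpu: "1"

A pod created with no resources block comes back from admission with both maps populated, which is what the two "Verifying requests/limits" comparisons above check.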
• [SLOW TEST:7.703 seconds] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":294,"completed":177,"skipped":2938,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:49:12.382: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 20 02:49:12.953: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jul 20 02:49:15.515: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730810152, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730810152, loc:(*time.Location)(0x7deddc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730810153, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730810152, loc:(*time.Location)(0x7deddc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 20 02:49:17.526: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730810152, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730810152, loc:(*time.Location)(0x7deddc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730810153, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730810152, loc:(*time.Location)(0x7deddc0)}}, Reason:"ReplicaSetUpdated", 
Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 20 02:49:20.562: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:49:20.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-468" for this suite. STEP: Destroying namespace "webhook-468-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.342 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":294,"completed":178,"skipped":2940,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:49:20.725: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod Jul 20 02:49:25.302: INFO: Successfully updated pod "labelsupdateb9d3a65c-0b35-43f3-98b2-d09a3d752d56" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:49:29.353: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-400" for this suite. • [SLOW TEST:8.635 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":294,"completed":179,"skipped":2986,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:49:29.361: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 20 02:49:29.499: INFO: The status of Pod test-webserver-e2b4d0b4-e1eb-4923-8b16-336396e0320e is Pending, waiting for it to be Running (with Ready = true) Jul 20 02:49:31.544: INFO: The status of Pod test-webserver-e2b4d0b4-e1eb-4923-8b16-336396e0320e is Pending, waiting for it to be Running (with Ready = true) Jul 20 02:49:33.503: INFO: The status of Pod test-webserver-e2b4d0b4-e1eb-4923-8b16-336396e0320e is Running (Ready = false) Jul 20 02:49:35.521: INFO: The status of Pod test-webserver-e2b4d0b4-e1eb-4923-8b16-336396e0320e is Running (Ready = false) Jul 20 02:49:37.503: INFO: The status of Pod test-webserver-e2b4d0b4-e1eb-4923-8b16-336396e0320e is Running (Ready = false) Jul 20 02:49:39.533: INFO: The status of Pod test-webserver-e2b4d0b4-e1eb-4923-8b16-336396e0320e is Running (Ready = false) Jul 20 02:49:41.502: INFO: The status of Pod test-webserver-e2b4d0b4-e1eb-4923-8b16-336396e0320e is Running (Ready = false) Jul 20 02:49:43.503: INFO: The status of Pod test-webserver-e2b4d0b4-e1eb-4923-8b16-336396e0320e is Running (Ready = false) Jul 20 02:49:45.502: INFO: The status of Pod test-webserver-e2b4d0b4-e1eb-4923-8b16-336396e0320e is Running (Ready = false) Jul 20 02:49:47.503: INFO: The status of Pod test-webserver-e2b4d0b4-e1eb-4923-8b16-336396e0320e is Running (Ready = false) Jul 20 02:49:49.503: INFO: The status of Pod test-webserver-e2b4d0b4-e1eb-4923-8b16-336396e0320e is Running (Ready = false) Jul 20 02:49:51.503: INFO: The status of Pod test-webserver-e2b4d0b4-e1eb-4923-8b16-336396e0320e is Running (Ready = false) Jul 20 02:49:53.503: INFO: The status of Pod test-webserver-e2b4d0b4-e1eb-4923-8b16-336396e0320e is Running (Ready = true) Jul 20 02:49:53.506: INFO: Container started at 2020-07-20 02:49:32 +0000 UTC, pod became ready at 2020-07-20 02:49:51 +0000 UTC [AfterEach] [k8s.io] Probing container 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:49:53.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5723" for this suite. • [SLOW TEST:24.154 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":294,"completed":180,"skipped":3001,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:49:53.515: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:50:01.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-309" for this suite. 
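The container-probe spec a little further up pins down initialDelaySeconds semantics: the log shows the container started at 02:49:32 but the pod only turned Ready at 02:49:51, consistent with a readiness probe whose initial delay is about 20 seconds. A sketch of such a pod (the image and numbers are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: readiness-delay-demo
spec:
  containers:
  - name: test-webserver
    image: nginx
    ports:
    - containerPort: 80
    readinessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 20   # the pod must not be Ready before this elapses
      periodSeconds: 3
      failureThreshold: 3

Because only readiness (not liveness) is probed, the container is never restarted; it simply stays Running (Ready = false) until the delay passes, matching the long run of status lines above.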
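The Kubelet spec that just destroyed kubelet-test-309 schedules a command that always fails and then asserts on the terminated state; something like (names invented):

apiVersion: v1
kind: Pod
metadata:
  name: bin-false-demo
spec:
  restartPolicy: Never
  containers:
  - name: always-fails
    image: busybox
    command: ["/bin/false"]   # exits non-zero immediately

Once it finishes, kubectl get pod bin-false-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}' reports the terminated reason (typically "Error"), which is the condition the spec checks.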
• [SLOW TEST:8.101 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:79 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":294,"completed":181,"skipped":3017,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:50:01.617: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:255 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 20 02:50:01.659: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config version' Jul 20 02:50:01.810: INFO: stderr: "" Jul 20 02:50:01.810: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"20+\", GitVersion:\"v1.20.0-alpha.0.4+2d327ac4558d78\", GitCommit:\"2d327ac4558d78c744004db178dacb80bd6e0b9e\", GitTreeState:\"clean\", BuildDate:\"2020-07-10T11:25:25Z\", GoVersion:\"go1.14.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"19+\", GitVersion:\"v1.19.0-rc.1\", GitCommit:\"2cbdfecbbd57dbd4e9f42d73a75fbbc6d9eadfd3\", GitTreeState:\"clean\", BuildDate:\"2020-07-19T21:33:31Z\", GoVersion:\"go1.14.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:50:01.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-701" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":294,"completed":182,"skipped":3066,"failed":0} SS ------------------------------ [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:50:01.819: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should run through the lifecycle of a ServiceAccount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a ServiceAccount STEP: watching for the ServiceAccount to be added STEP: patching the ServiceAccount STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector) STEP: deleting the ServiceAccount [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:50:02.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-9907" for this suite. •{"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":294,"completed":183,"skipped":3068,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:50:02.132: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Jul 20 02:50:12.271: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7153 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 20 02:50:12.272: INFO: >>> kubeConfig: /root/.kube/config I0720 02:50:12.307649 8 log.go:181] (0xc007118790) (0xc0028e83c0) Create stream I0720 02:50:12.307687 8 log.go:181] (0xc007118790) (0xc0028e83c0) Stream added, broadcasting: 1 I0720 02:50:12.309972 8 log.go:181] (0xc007118790) Reply frame received for 1 I0720 02:50:12.310025 8 log.go:181] (0xc007118790) (0xc0021c5a40) Create stream I0720 02:50:12.310050 8 log.go:181] (0xc007118790) (0xc0021c5a40) Stream added, broadcasting: 3 I0720 02:50:12.311152 8 log.go:181] (0xc007118790) Reply frame received for 3 I0720 02:50:12.311198 8 log.go:181] (0xc007118790) (0xc002702f00) Create stream 
I0720 02:50:12.311217 8 log.go:181] (0xc007118790) (0xc002702f00) Stream added, broadcasting: 5 I0720 02:50:12.312070 8 log.go:181] (0xc007118790) Reply frame received for 5 I0720 02:50:12.366358 8 log.go:181] (0xc007118790) Data frame received for 5 I0720 02:50:12.366402 8 log.go:181] (0xc002702f00) (5) Data frame handling I0720 02:50:12.366430 8 log.go:181] (0xc007118790) Data frame received for 3 I0720 02:50:12.366446 8 log.go:181] (0xc0021c5a40) (3) Data frame handling I0720 02:50:12.366461 8 log.go:181] (0xc0021c5a40) (3) Data frame sent I0720 02:50:12.366474 8 log.go:181] (0xc007118790) Data frame received for 3 I0720 02:50:12.366486 8 log.go:181] (0xc0021c5a40) (3) Data frame handling I0720 02:50:12.368226 8 log.go:181] (0xc007118790) Data frame received for 1 I0720 02:50:12.368289 8 log.go:181] (0xc0028e83c0) (1) Data frame handling I0720 02:50:12.368336 8 log.go:181] (0xc0028e83c0) (1) Data frame sent I0720 02:50:12.368360 8 log.go:181] (0xc007118790) (0xc0028e83c0) Stream removed, broadcasting: 1 I0720 02:50:12.368380 8 log.go:181] (0xc007118790) Go away received I0720 02:50:12.368498 8 log.go:181] (0xc007118790) (0xc0028e83c0) Stream removed, broadcasting: 1 I0720 02:50:12.368540 8 log.go:181] (0xc007118790) (0xc0021c5a40) Stream removed, broadcasting: 3 I0720 02:50:12.368601 8 log.go:181] (0xc007118790) (0xc002702f00) Stream removed, broadcasting: 5 Jul 20 02:50:12.368: INFO: Exec stderr: "" Jul 20 02:50:12.368: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7153 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 20 02:50:12.368: INFO: >>> kubeConfig: /root/.kube/config I0720 02:50:12.404707 8 log.go:181] (0xc0039e2580) (0xc002b74820) Create stream I0720 02:50:12.404819 8 log.go:181] (0xc0039e2580) (0xc002b74820) Stream added, broadcasting: 1 I0720 02:50:12.406711 8 log.go:181] (0xc0039e2580) Reply frame received for 1 I0720 02:50:12.406764 8 log.go:181] (0xc0039e2580) (0xc002b748c0) Create stream I0720 02:50:12.406785 8 log.go:181] (0xc0039e2580) (0xc002b748c0) Stream added, broadcasting: 3 I0720 02:50:12.407690 8 log.go:181] (0xc0039e2580) Reply frame received for 3 I0720 02:50:12.407724 8 log.go:181] (0xc0039e2580) (0xc0028e8500) Create stream I0720 02:50:12.407744 8 log.go:181] (0xc0039e2580) (0xc0028e8500) Stream added, broadcasting: 5 I0720 02:50:12.408883 8 log.go:181] (0xc0039e2580) Reply frame received for 5 I0720 02:50:12.475732 8 log.go:181] (0xc0039e2580) Data frame received for 3 I0720 02:50:12.475759 8 log.go:181] (0xc002b748c0) (3) Data frame handling I0720 02:50:12.475778 8 log.go:181] (0xc002b748c0) (3) Data frame sent I0720 02:50:12.475976 8 log.go:181] (0xc0039e2580) Data frame received for 5 I0720 02:50:12.476003 8 log.go:181] (0xc0028e8500) (5) Data frame handling I0720 02:50:12.476139 8 log.go:181] (0xc0039e2580) Data frame received for 3 I0720 02:50:12.476158 8 log.go:181] (0xc002b748c0) (3) Data frame handling I0720 02:50:12.481532 8 log.go:181] (0xc0039e2580) Data frame received for 1 I0720 02:50:12.481560 8 log.go:181] (0xc002b74820) (1) Data frame handling I0720 02:50:12.481588 8 log.go:181] (0xc002b74820) (1) Data frame sent I0720 02:50:12.481603 8 log.go:181] (0xc0039e2580) (0xc002b74820) Stream removed, broadcasting: 1 I0720 02:50:12.481628 8 log.go:181] (0xc0039e2580) Go away received I0720 02:50:12.481693 8 log.go:181] (0xc0039e2580) (0xc002b74820) Stream removed, broadcasting: 1 I0720 02:50:12.481715 8 log.go:181] (0xc0039e2580) 
(0xc002b748c0) Stream removed, broadcasting: 3 I0720 02:50:12.481725 8 log.go:181] (0xc0039e2580) (0xc0028e8500) Stream removed, broadcasting: 5 Jul 20 02:50:12.481: INFO: Exec stderr: "" Jul 20 02:50:12.481: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7153 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 20 02:50:12.481: INFO: >>> kubeConfig: /root/.kube/config I0720 02:50:12.527334 8 log.go:181] (0xc004b2e370) (0xc0021c5f40) Create stream I0720 02:50:12.527371 8 log.go:181] (0xc004b2e370) (0xc0021c5f40) Stream added, broadcasting: 1 I0720 02:50:12.529196 8 log.go:181] (0xc004b2e370) Reply frame received for 1 I0720 02:50:12.529247 8 log.go:181] (0xc004b2e370) (0xc0028e85a0) Create stream I0720 02:50:12.529267 8 log.go:181] (0xc004b2e370) (0xc0028e85a0) Stream added, broadcasting: 3 I0720 02:50:12.530000 8 log.go:181] (0xc004b2e370) Reply frame received for 3 I0720 02:50:12.530022 8 log.go:181] (0xc004b2e370) (0xc002054280) Create stream I0720 02:50:12.530032 8 log.go:181] (0xc004b2e370) (0xc002054280) Stream added, broadcasting: 5 I0720 02:50:12.530796 8 log.go:181] (0xc004b2e370) Reply frame received for 5 I0720 02:50:12.602556 8 log.go:181] (0xc004b2e370) Data frame received for 5 I0720 02:50:12.602586 8 log.go:181] (0xc002054280) (5) Data frame handling I0720 02:50:12.602619 8 log.go:181] (0xc004b2e370) Data frame received for 3 I0720 02:50:12.602638 8 log.go:181] (0xc0028e85a0) (3) Data frame handling I0720 02:50:12.602654 8 log.go:181] (0xc0028e85a0) (3) Data frame sent I0720 02:50:12.602664 8 log.go:181] (0xc004b2e370) Data frame received for 3 I0720 02:50:12.602680 8 log.go:181] (0xc0028e85a0) (3) Data frame handling I0720 02:50:12.604145 8 log.go:181] (0xc004b2e370) Data frame received for 1 I0720 02:50:12.604188 8 log.go:181] (0xc0021c5f40) (1) Data frame handling I0720 02:50:12.604239 8 log.go:181] (0xc0021c5f40) (1) Data frame sent I0720 02:50:12.604282 8 log.go:181] (0xc004b2e370) (0xc0021c5f40) Stream removed, broadcasting: 1 I0720 02:50:12.604326 8 log.go:181] (0xc004b2e370) Go away received I0720 02:50:12.604381 8 log.go:181] (0xc004b2e370) (0xc0021c5f40) Stream removed, broadcasting: 1 I0720 02:50:12.604421 8 log.go:181] (0xc004b2e370) (0xc0028e85a0) Stream removed, broadcasting: 3 I0720 02:50:12.604454 8 log.go:181] (0xc004b2e370) (0xc002054280) Stream removed, broadcasting: 5 Jul 20 02:50:12.604: INFO: Exec stderr: "" Jul 20 02:50:12.604: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7153 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 20 02:50:12.604: INFO: >>> kubeConfig: /root/.kube/config I0720 02:50:12.637864 8 log.go:181] (0xc0039e28f0) (0xc002b74aa0) Create stream I0720 02:50:12.637894 8 log.go:181] (0xc0039e28f0) (0xc002b74aa0) Stream added, broadcasting: 1 I0720 02:50:12.639988 8 log.go:181] (0xc0039e28f0) Reply frame received for 1 I0720 02:50:12.640037 8 log.go:181] (0xc0039e28f0) (0xc002703180) Create stream I0720 02:50:12.640056 8 log.go:181] (0xc0039e28f0) (0xc002703180) Stream added, broadcasting: 3 I0720 02:50:12.641389 8 log.go:181] (0xc0039e28f0) Reply frame received for 3 I0720 02:50:12.641439 8 log.go:181] (0xc0039e28f0) (0xc0028e8640) Create stream I0720 02:50:12.641459 8 log.go:181] (0xc0039e28f0) (0xc0028e8640) Stream added, broadcasting: 5 I0720 02:50:12.642515 8 log.go:181] (0xc0039e28f0) Reply frame received for 5 I0720 
02:50:12.705173 8 log.go:181] (0xc0039e28f0) Data frame received for 5 I0720 02:50:12.705225 8 log.go:181] (0xc0028e8640) (5) Data frame handling I0720 02:50:12.705254 8 log.go:181] (0xc0039e28f0) Data frame received for 3 I0720 02:50:12.705268 8 log.go:181] (0xc002703180) (3) Data frame handling I0720 02:50:12.705276 8 log.go:181] (0xc002703180) (3) Data frame sent I0720 02:50:12.705656 8 log.go:181] (0xc0039e28f0) Data frame received for 3 I0720 02:50:12.705687 8 log.go:181] (0xc002703180) (3) Data frame handling I0720 02:50:12.707169 8 log.go:181] (0xc0039e28f0) Data frame received for 1 I0720 02:50:12.707206 8 log.go:181] (0xc002b74aa0) (1) Data frame handling I0720 02:50:12.707235 8 log.go:181] (0xc002b74aa0) (1) Data frame sent I0720 02:50:12.707262 8 log.go:181] (0xc0039e28f0) (0xc002b74aa0) Stream removed, broadcasting: 1 I0720 02:50:12.707302 8 log.go:181] (0xc0039e28f0) Go away received I0720 02:50:12.707400 8 log.go:181] (0xc0039e28f0) (0xc002b74aa0) Stream removed, broadcasting: 1 I0720 02:50:12.707424 8 log.go:181] (0xc0039e28f0) (0xc002703180) Stream removed, broadcasting: 3 I0720 02:50:12.707443 8 log.go:181] (0xc0039e28f0) (0xc0028e8640) Stream removed, broadcasting: 5 Jul 20 02:50:12.707: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Jul 20 02:50:12.707: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7153 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 20 02:50:12.707: INFO: >>> kubeConfig: /root/.kube/config I0720 02:50:12.745237 8 log.go:181] (0xc000d36b00) (0xc0027034a0) Create stream I0720 02:50:12.745273 8 log.go:181] (0xc000d36b00) (0xc0027034a0) Stream added, broadcasting: 1 I0720 02:50:12.747047 8 log.go:181] (0xc000d36b00) Reply frame received for 1 I0720 02:50:12.747088 8 log.go:181] (0xc000d36b00) (0xc002b74e60) Create stream I0720 02:50:12.747098 8 log.go:181] (0xc000d36b00) (0xc002b74e60) Stream added, broadcasting: 3 I0720 02:50:12.747827 8 log.go:181] (0xc000d36b00) Reply frame received for 3 I0720 02:50:12.747860 8 log.go:181] (0xc000d36b00) (0xc002b74fa0) Create stream I0720 02:50:12.747869 8 log.go:181] (0xc000d36b00) (0xc002b74fa0) Stream added, broadcasting: 5 I0720 02:50:12.748514 8 log.go:181] (0xc000d36b00) Reply frame received for 5 I0720 02:50:12.805895 8 log.go:181] (0xc000d36b00) Data frame received for 5 I0720 02:50:12.805950 8 log.go:181] (0xc002b74fa0) (5) Data frame handling I0720 02:50:12.805978 8 log.go:181] (0xc000d36b00) Data frame received for 3 I0720 02:50:12.805993 8 log.go:181] (0xc002b74e60) (3) Data frame handling I0720 02:50:12.806005 8 log.go:181] (0xc002b74e60) (3) Data frame sent I0720 02:50:12.806187 8 log.go:181] (0xc000d36b00) Data frame received for 3 I0720 02:50:12.806219 8 log.go:181] (0xc002b74e60) (3) Data frame handling I0720 02:50:12.808088 8 log.go:181] (0xc000d36b00) Data frame received for 1 I0720 02:50:12.808119 8 log.go:181] (0xc0027034a0) (1) Data frame handling I0720 02:50:12.808149 8 log.go:181] (0xc0027034a0) (1) Data frame sent I0720 02:50:12.808169 8 log.go:181] (0xc000d36b00) (0xc0027034a0) Stream removed, broadcasting: 1 I0720 02:50:12.808235 8 log.go:181] (0xc000d36b00) Go away received I0720 02:50:12.808301 8 log.go:181] (0xc000d36b00) (0xc0027034a0) Stream removed, broadcasting: 1 I0720 02:50:12.808338 8 log.go:181] (0xc000d36b00) (0xc002b74e60) Stream removed, broadcasting: 3 I0720 02:50:12.808351 8 log.go:181] 
(0xc000d36b00) (0xc002b74fa0) Stream removed, broadcasting: 5 Jul 20 02:50:12.808: INFO: Exec stderr: "" Jul 20 02:50:12.808: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7153 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 20 02:50:12.808: INFO: >>> kubeConfig: /root/.kube/config I0720 02:50:12.846437 8 log.go:181] (0xc000d37130) (0xc002703860) Create stream I0720 02:50:12.846511 8 log.go:181] (0xc000d37130) (0xc002703860) Stream added, broadcasting: 1 I0720 02:50:12.849561 8 log.go:181] (0xc000d37130) Reply frame received for 1 I0720 02:50:12.849627 8 log.go:181] (0xc000d37130) (0xc0026b2000) Create stream I0720 02:50:12.849640 8 log.go:181] (0xc000d37130) (0xc0026b2000) Stream added, broadcasting: 3 I0720 02:50:12.853590 8 log.go:181] (0xc000d37130) Reply frame received for 3 I0720 02:50:12.853664 8 log.go:181] (0xc000d37130) (0xc0026b20a0) Create stream I0720 02:50:12.853693 8 log.go:181] (0xc000d37130) (0xc0026b20a0) Stream added, broadcasting: 5 I0720 02:50:12.862488 8 log.go:181] (0xc000d37130) Reply frame received for 5 I0720 02:50:12.929488 8 log.go:181] (0xc000d37130) Data frame received for 5 I0720 02:50:12.929548 8 log.go:181] (0xc0026b20a0) (5) Data frame handling I0720 02:50:12.929589 8 log.go:181] (0xc000d37130) Data frame received for 3 I0720 02:50:12.929610 8 log.go:181] (0xc0026b2000) (3) Data frame handling I0720 02:50:12.929638 8 log.go:181] (0xc0026b2000) (3) Data frame sent I0720 02:50:12.929659 8 log.go:181] (0xc000d37130) Data frame received for 3 I0720 02:50:12.929679 8 log.go:181] (0xc0026b2000) (3) Data frame handling I0720 02:50:12.931936 8 log.go:181] (0xc000d37130) Data frame received for 1 I0720 02:50:12.931968 8 log.go:181] (0xc002703860) (1) Data frame handling I0720 02:50:12.931986 8 log.go:181] (0xc002703860) (1) Data frame sent I0720 02:50:12.932003 8 log.go:181] (0xc000d37130) (0xc002703860) Stream removed, broadcasting: 1 I0720 02:50:12.932123 8 log.go:181] (0xc000d37130) (0xc002703860) Stream removed, broadcasting: 1 I0720 02:50:12.932195 8 log.go:181] (0xc000d37130) (0xc0026b2000) Stream removed, broadcasting: 3 I0720 02:50:12.932249 8 log.go:181] (0xc000d37130) (0xc0026b20a0) Stream removed, broadcasting: 5 Jul 20 02:50:12.932: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Jul 20 02:50:12.932: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7153 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 20 02:50:12.932: INFO: >>> kubeConfig: /root/.kube/config I0720 02:50:12.933227 8 log.go:181] (0xc000d37130) Go away received I0720 02:50:12.965658 8 log.go:181] (0xc000d37760) (0xc0026b2320) Create stream I0720 02:50:12.965727 8 log.go:181] (0xc000d37760) (0xc0026b2320) Stream added, broadcasting: 1 I0720 02:50:12.967404 8 log.go:181] (0xc000d37760) Reply frame received for 1 I0720 02:50:12.967430 8 log.go:181] (0xc000d37760) (0xc0026b2460) Create stream I0720 02:50:12.967441 8 log.go:181] (0xc000d37760) (0xc0026b2460) Stream added, broadcasting: 3 I0720 02:50:12.968127 8 log.go:181] (0xc000d37760) Reply frame received for 3 I0720 02:50:12.968152 8 log.go:181] (0xc000d37760) (0xc0028e86e0) Create stream I0720 02:50:12.968163 8 log.go:181] (0xc000d37760) (0xc0028e86e0) Stream added, broadcasting: 5 I0720 02:50:12.969225 8 log.go:181] (0xc000d37760) Reply frame 
received for 5 I0720 02:50:13.023491 8 log.go:181] (0xc000d37760) Data frame received for 3 I0720 02:50:13.023545 8 log.go:181] (0xc0026b2460) (3) Data frame handling I0720 02:50:13.023571 8 log.go:181] (0xc0026b2460) (3) Data frame sent I0720 02:50:13.023602 8 log.go:181] (0xc000d37760) Data frame received for 5 I0720 02:50:13.023649 8 log.go:181] (0xc0028e86e0) (5) Data frame handling I0720 02:50:13.023681 8 log.go:181] (0xc000d37760) Data frame received for 3 I0720 02:50:13.023728 8 log.go:181] (0xc0026b2460) (3) Data frame handling I0720 02:50:13.027129 8 log.go:181] (0xc000d37760) Data frame received for 1 I0720 02:50:13.027160 8 log.go:181] (0xc0026b2320) (1) Data frame handling I0720 02:50:13.027182 8 log.go:181] (0xc0026b2320) (1) Data frame sent I0720 02:50:13.027202 8 log.go:181] (0xc000d37760) (0xc0026b2320) Stream removed, broadcasting: 1 I0720 02:50:13.027221 8 log.go:181] (0xc000d37760) Go away received I0720 02:50:13.027381 8 log.go:181] (0xc000d37760) (0xc0026b2320) Stream removed, broadcasting: 1 I0720 02:50:13.027420 8 log.go:181] (0xc000d37760) (0xc0026b2460) Stream removed, broadcasting: 3 I0720 02:50:13.027432 8 log.go:181] (0xc000d37760) (0xc0028e86e0) Stream removed, broadcasting: 5 Jul 20 02:50:13.027: INFO: Exec stderr: "" Jul 20 02:50:13.027: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7153 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 20 02:50:13.027: INFO: >>> kubeConfig: /root/.kube/config I0720 02:50:13.060794 8 log.go:181] (0xc0069bc370) (0xc001cac6e0) Create stream I0720 02:50:13.060833 8 log.go:181] (0xc0069bc370) (0xc001cac6e0) Stream added, broadcasting: 1 I0720 02:50:13.062297 8 log.go:181] (0xc0069bc370) Reply frame received for 1 I0720 02:50:13.062331 8 log.go:181] (0xc0069bc370) (0xc002054320) Create stream I0720 02:50:13.062342 8 log.go:181] (0xc0069bc370) (0xc002054320) Stream added, broadcasting: 3 I0720 02:50:13.063161 8 log.go:181] (0xc0069bc370) Reply frame received for 3 I0720 02:50:13.063213 8 log.go:181] (0xc0069bc370) (0xc0026b25a0) Create stream I0720 02:50:13.063237 8 log.go:181] (0xc0069bc370) (0xc0026b25a0) Stream added, broadcasting: 5 I0720 02:50:13.063997 8 log.go:181] (0xc0069bc370) Reply frame received for 5 I0720 02:50:13.125272 8 log.go:181] (0xc0069bc370) Data frame received for 3 I0720 02:50:13.125303 8 log.go:181] (0xc002054320) (3) Data frame handling I0720 02:50:13.125324 8 log.go:181] (0xc002054320) (3) Data frame sent I0720 02:50:13.125350 8 log.go:181] (0xc0069bc370) Data frame received for 5 I0720 02:50:13.125365 8 log.go:181] (0xc0026b25a0) (5) Data frame handling I0720 02:50:13.125407 8 log.go:181] (0xc0069bc370) Data frame received for 3 I0720 02:50:13.125427 8 log.go:181] (0xc002054320) (3) Data frame handling I0720 02:50:13.126839 8 log.go:181] (0xc0069bc370) Data frame received for 1 I0720 02:50:13.126873 8 log.go:181] (0xc001cac6e0) (1) Data frame handling I0720 02:50:13.126892 8 log.go:181] (0xc001cac6e0) (1) Data frame sent I0720 02:50:13.126905 8 log.go:181] (0xc0069bc370) (0xc001cac6e0) Stream removed, broadcasting: 1 I0720 02:50:13.126918 8 log.go:181] (0xc0069bc370) Go away received I0720 02:50:13.127013 8 log.go:181] (0xc0069bc370) (0xc001cac6e0) Stream removed, broadcasting: 1 I0720 02:50:13.127040 8 log.go:181] (0xc0069bc370) (0xc002054320) Stream removed, broadcasting: 3 I0720 02:50:13.127061 8 log.go:181] (0xc0069bc370) (0xc0026b25a0) Stream removed, broadcasting: 5 Jul 20 
02:50:13.127: INFO: Exec stderr: "" Jul 20 02:50:13.127: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7153 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 20 02:50:13.127: INFO: >>> kubeConfig: /root/.kube/config I0720 02:50:13.167231 8 log.go:181] (0xc007118dc0) (0xc0028e8a00) Create stream I0720 02:50:13.167274 8 log.go:181] (0xc007118dc0) (0xc0028e8a00) Stream added, broadcasting: 1 I0720 02:50:13.169221 8 log.go:181] (0xc007118dc0) Reply frame received for 1 I0720 02:50:13.169266 8 log.go:181] (0xc007118dc0) (0xc002b75040) Create stream I0720 02:50:13.169281 8 log.go:181] (0xc007118dc0) (0xc002b75040) Stream added, broadcasting: 3 I0720 02:50:13.170407 8 log.go:181] (0xc007118dc0) Reply frame received for 3 I0720 02:50:13.170463 8 log.go:181] (0xc007118dc0) (0xc002b750e0) Create stream I0720 02:50:13.170478 8 log.go:181] (0xc007118dc0) (0xc002b750e0) Stream added, broadcasting: 5 I0720 02:50:13.171579 8 log.go:181] (0xc007118dc0) Reply frame received for 5 I0720 02:50:13.243019 8 log.go:181] (0xc007118dc0) Data frame received for 5 I0720 02:50:13.243063 8 log.go:181] (0xc002b750e0) (5) Data frame handling I0720 02:50:13.243086 8 log.go:181] (0xc007118dc0) Data frame received for 3 I0720 02:50:13.243100 8 log.go:181] (0xc002b75040) (3) Data frame handling I0720 02:50:13.243114 8 log.go:181] (0xc002b75040) (3) Data frame sent I0720 02:50:13.243138 8 log.go:181] (0xc007118dc0) Data frame received for 3 I0720 02:50:13.243152 8 log.go:181] (0xc002b75040) (3) Data frame handling I0720 02:50:13.244352 8 log.go:181] (0xc007118dc0) Data frame received for 1 I0720 02:50:13.244370 8 log.go:181] (0xc0028e8a00) (1) Data frame handling I0720 02:50:13.244393 8 log.go:181] (0xc0028e8a00) (1) Data frame sent I0720 02:50:13.244440 8 log.go:181] (0xc007118dc0) (0xc0028e8a00) Stream removed, broadcasting: 1 I0720 02:50:13.244516 8 log.go:181] (0xc007118dc0) (0xc0028e8a00) Stream removed, broadcasting: 1 I0720 02:50:13.244530 8 log.go:181] (0xc007118dc0) (0xc002b75040) Stream removed, broadcasting: 3 I0720 02:50:13.244616 8 log.go:181] (0xc007118dc0) Go away received I0720 02:50:13.244662 8 log.go:181] (0xc007118dc0) (0xc002b750e0) Stream removed, broadcasting: 5 Jul 20 02:50:13.244: INFO: Exec stderr: "" Jul 20 02:50:13.244: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7153 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 20 02:50:13.244: INFO: >>> kubeConfig: /root/.kube/config I0720 02:50:13.268118 8 log.go:181] (0xc0071193f0) (0xc0028e8c80) Create stream I0720 02:50:13.268142 8 log.go:181] (0xc0071193f0) (0xc0028e8c80) Stream added, broadcasting: 1 I0720 02:50:13.270014 8 log.go:181] (0xc0071193f0) Reply frame received for 1 I0720 02:50:13.270046 8 log.go:181] (0xc0071193f0) (0xc002054460) Create stream I0720 02:50:13.270056 8 log.go:181] (0xc0071193f0) (0xc002054460) Stream added, broadcasting: 3 I0720 02:50:13.270925 8 log.go:181] (0xc0071193f0) Reply frame received for 3 I0720 02:50:13.270970 8 log.go:181] (0xc0071193f0) (0xc001cac780) Create stream I0720 02:50:13.270984 8 log.go:181] (0xc0071193f0) (0xc001cac780) Stream added, broadcasting: 5 I0720 02:50:13.271847 8 log.go:181] (0xc0071193f0) Reply frame received for 5 I0720 02:50:13.341397 8 log.go:181] (0xc0071193f0) Data frame received for 3 I0720 02:50:13.341439 8 log.go:181] (0xc002054460) (3) Data frame 
handling I0720 02:50:13.341457 8 log.go:181] (0xc002054460) (3) Data frame sent I0720 02:50:13.341470 8 log.go:181] (0xc0071193f0) Data frame received for 3 I0720 02:50:13.341478 8 log.go:181] (0xc002054460) (3) Data frame handling I0720 02:50:13.341514 8 log.go:181] (0xc0071193f0) Data frame received for 5 I0720 02:50:13.341544 8 log.go:181] (0xc001cac780) (5) Data frame handling I0720 02:50:13.342702 8 log.go:181] (0xc0071193f0) Data frame received for 1 I0720 02:50:13.342759 8 log.go:181] (0xc0028e8c80) (1) Data frame handling I0720 02:50:13.342818 8 log.go:181] (0xc0028e8c80) (1) Data frame sent I0720 02:50:13.342864 8 log.go:181] (0xc0071193f0) (0xc0028e8c80) Stream removed, broadcasting: 1 I0720 02:50:13.342914 8 log.go:181] (0xc0071193f0) Go away received I0720 02:50:13.342997 8 log.go:181] (0xc0071193f0) (0xc0028e8c80) Stream removed, broadcasting: 1 I0720 02:50:13.343088 8 log.go:181] (0xc0071193f0) (0xc002054460) Stream removed, broadcasting: 3 I0720 02:50:13.343155 8 log.go:181] (0xc0071193f0) (0xc001cac780) Stream removed, broadcasting: 5 Jul 20 02:50:13.343: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:50:13.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-7153" for this suite. • [SLOW TEST:11.219 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":184,"skipped":3112,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:50:13.351: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-8301 STEP: creating a selector STEP: Creating the service pods in kubernetes Jul 20 02:50:13.411: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Jul 20 02:50:13.494: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jul 20 02:50:15.497: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jul 20 02:50:17.498: INFO: The status of Pod netserver-0 is Running (Ready = false) Jul 20 02:50:19.498: INFO: The status of Pod netserver-0 is Running (Ready = false) Jul 20 02:50:21.498: INFO: The status of Pod netserver-0 is Running (Ready = false) Jul 20 02:50:23.498: INFO: The 
status of Pod netserver-0 is Running (Ready = false) Jul 20 02:50:25.498: INFO: The status of Pod netserver-0 is Running (Ready = false) Jul 20 02:50:27.498: INFO: The status of Pod netserver-0 is Running (Ready = false) Jul 20 02:50:29.498: INFO: The status of Pod netserver-0 is Running (Ready = false) Jul 20 02:50:31.498: INFO: The status of Pod netserver-0 is Running (Ready = false) Jul 20 02:50:33.498: INFO: The status of Pod netserver-0 is Running (Ready = true) Jul 20 02:50:33.504: INFO: The status of Pod netserver-1 is Running (Ready = false) Jul 20 02:50:35.509: INFO: The status of Pod netserver-1 is Running (Ready = false) Jul 20 02:50:37.510: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Jul 20 02:50:41.537: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.32:8080/dial?request=hostname&protocol=http&host=10.244.1.31&port=8080&tries=1'] Namespace:pod-network-test-8301 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 20 02:50:41.537: INFO: >>> kubeConfig: /root/.kube/config I0720 02:50:41.577317 8 log.go:181] (0xc000d37ce0) (0xc0026b3ae0) Create stream I0720 02:50:41.577355 8 log.go:181] (0xc000d37ce0) (0xc0026b3ae0) Stream added, broadcasting: 1 I0720 02:50:41.579197 8 log.go:181] (0xc000d37ce0) Reply frame received for 1 I0720 02:50:41.579240 8 log.go:181] (0xc000d37ce0) (0xc0026b3b80) Create stream I0720 02:50:41.579258 8 log.go:181] (0xc000d37ce0) (0xc0026b3b80) Stream added, broadcasting: 3 I0720 02:50:41.579998 8 log.go:181] (0xc000d37ce0) Reply frame received for 3 I0720 02:50:41.580033 8 log.go:181] (0xc000d37ce0) (0xc00079eb40) Create stream I0720 02:50:41.580045 8 log.go:181] (0xc000d37ce0) (0xc00079eb40) Stream added, broadcasting: 5 I0720 02:50:41.581147 8 log.go:181] (0xc000d37ce0) Reply frame received for 5 I0720 02:50:41.700144 8 log.go:181] (0xc000d37ce0) Data frame received for 3 I0720 02:50:41.700172 8 log.go:181] (0xc0026b3b80) (3) Data frame handling I0720 02:50:41.700186 8 log.go:181] (0xc0026b3b80) (3) Data frame sent I0720 02:50:41.700691 8 log.go:181] (0xc000d37ce0) Data frame received for 5 I0720 02:50:41.700712 8 log.go:181] (0xc00079eb40) (5) Data frame handling I0720 02:50:41.700810 8 log.go:181] (0xc000d37ce0) Data frame received for 3 I0720 02:50:41.700835 8 log.go:181] (0xc0026b3b80) (3) Data frame handling I0720 02:50:41.703172 8 log.go:181] (0xc000d37ce0) Data frame received for 1 I0720 02:50:41.703202 8 log.go:181] (0xc0026b3ae0) (1) Data frame handling I0720 02:50:41.703211 8 log.go:181] (0xc0026b3ae0) (1) Data frame sent I0720 02:50:41.703222 8 log.go:181] (0xc000d37ce0) (0xc0026b3ae0) Stream removed, broadcasting: 1 I0720 02:50:41.703237 8 log.go:181] (0xc000d37ce0) Go away received I0720 02:50:41.703403 8 log.go:181] (0xc000d37ce0) (0xc0026b3ae0) Stream removed, broadcasting: 1 I0720 02:50:41.703434 8 log.go:181] (0xc000d37ce0) (0xc0026b3b80) Stream removed, broadcasting: 3 I0720 02:50:41.703453 8 log.go:181] (0xc000d37ce0) (0xc00079eb40) Stream removed, broadcasting: 5 Jul 20 02:50:41.703: INFO: Waiting for responses: map[] Jul 20 02:50:41.706: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.32:8080/dial?request=hostname&protocol=http&host=10.244.2.36&port=8080&tries=1'] Namespace:pod-network-test-8301 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 20 02:50:41.706: INFO: >>> kubeConfig: 
/root/.kube/config I0720 02:50:41.730762 8 log.go:181] (0xc0069bca50) (0xc001cad2c0) Create stream I0720 02:50:41.730806 8 log.go:181] (0xc0069bca50) (0xc001cad2c0) Stream added, broadcasting: 1 I0720 02:50:41.733258 8 log.go:181] (0xc0069bca50) Reply frame received for 1 I0720 02:50:41.733319 8 log.go:181] (0xc0069bca50) (0xc002b75180) Create stream I0720 02:50:41.733338 8 log.go:181] (0xc0069bca50) (0xc002b75180) Stream added, broadcasting: 3 I0720 02:50:41.734382 8 log.go:181] (0xc0069bca50) Reply frame received for 3 I0720 02:50:41.734427 8 log.go:181] (0xc0069bca50) (0xc002b75220) Create stream I0720 02:50:41.734447 8 log.go:181] (0xc0069bca50) (0xc002b75220) Stream added, broadcasting: 5 I0720 02:50:41.735451 8 log.go:181] (0xc0069bca50) Reply frame received for 5 I0720 02:50:41.805079 8 log.go:181] (0xc0069bca50) Data frame received for 3 I0720 02:50:41.805103 8 log.go:181] (0xc002b75180) (3) Data frame handling I0720 02:50:41.805125 8 log.go:181] (0xc002b75180) (3) Data frame sent I0720 02:50:41.806186 8 log.go:181] (0xc0069bca50) Data frame received for 3 I0720 02:50:41.806226 8 log.go:181] (0xc002b75180) (3) Data frame handling I0720 02:50:41.806812 8 log.go:181] (0xc0069bca50) Data frame received for 5 I0720 02:50:41.806848 8 log.go:181] (0xc002b75220) (5) Data frame handling I0720 02:50:41.807944 8 log.go:181] (0xc0069bca50) Data frame received for 1 I0720 02:50:41.807959 8 log.go:181] (0xc001cad2c0) (1) Data frame handling I0720 02:50:41.807967 8 log.go:181] (0xc001cad2c0) (1) Data frame sent I0720 02:50:41.808189 8 log.go:181] (0xc0069bca50) (0xc001cad2c0) Stream removed, broadcasting: 1 I0720 02:50:41.808321 8 log.go:181] (0xc0069bca50) (0xc001cad2c0) Stream removed, broadcasting: 1 I0720 02:50:41.808340 8 log.go:181] (0xc0069bca50) (0xc002b75180) Stream removed, broadcasting: 3 I0720 02:50:41.808358 8 log.go:181] (0xc0069bca50) (0xc002b75220) Stream removed, broadcasting: 5 Jul 20 02:50:41.808: INFO: Waiting for responses: map[] I0720 02:50:41.808441 8 log.go:181] (0xc0069bca50) Go away received [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:50:41.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-8301" for this suite. 
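The two curl probes above are the core of this spec: the framework execs into test-container-pod and asks the webserver's /dial endpoint to reach each netserver pod in turn, and "Waiting for responses: map[]" means no expected hostname is still outstanding. A minimal standalone sketch of that probe follows, assuming the agnhost netexec /dial contract implied by this log (a JSON body with a "responses" array); the pod IPs are the ones from the run above and everything else is illustrative, not the framework's actual code.

    package main

    import (
        "encoding/json"
        "fmt"
        "net/http"
        "net/url"
    )

    // dialResponse mirrors the JSON the webserver's /dial endpoint is assumed
    // to return: the hostnames that answered the fanned-out request.
    type dialResponse struct {
        Responses []string `json:"responses"`
    }

    // dial asks the webserver on proxyPodIP to contact targetPodIP and report
    // which hostname answered, mirroring the curl issued inside the pod above.
    func dial(proxyPodIP, targetPodIP string) ([]string, error) {
        u := fmt.Sprintf(
            "http://%s:8080/dial?request=hostname&protocol=http&host=%s&port=8080&tries=1",
            proxyPodIP, url.QueryEscape(targetPodIP),
        )
        resp, err := http.Get(u)
        if err != nil {
            return nil, err
        }
        defer resp.Body.Close()
        var dr dialResponse
        if err := json.NewDecoder(resp.Body).Decode(&dr); err != nil {
            return nil, err
        }
        return dr.Responses, nil
    }

    func main() {
        // Pod IPs taken from the run above: test-container-pod -> netserver-0.
        fmt.Println(dial("10.244.1.32", "10.244.1.31"))
    }

Each returned hostname is checked off against the set of expected netserver pods, which is why the test issues one probe per netserver IP (10.244.1.31 and 10.244.2.36 above).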
• [SLOW TEST:28.465 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":294,"completed":185,"skipped":3123,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:50:41.817: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Jul 20 02:50:41.933: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-6999 /api/v1/namespaces/watch-6999/configmaps/e2e-watch-test-resource-version 0746750d-b890-49bd-8f45-2e72e803974a 105784 0 2020-07-20 02:50:41 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-07-20 02:50:41 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Jul 20 02:50:41.933: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-6999 /api/v1/namespaces/watch-6999/configmaps/e2e-watch-test-resource-version 0746750d-b890-49bd-8f45-2e72e803974a 105785 0 2020-07-20 02:50:41 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-07-20 02:50:41 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:50:41.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6999" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":294,"completed":186,"skipped":3137,"failed":0} SSSSSSSSS ------------------------------ [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:50:41.977: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:731 [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-1797 STEP: creating service affinity-nodeport in namespace services-1797 STEP: creating replication controller affinity-nodeport in namespace services-1797 I0720 02:50:42.188172 8 runners.go:190] Created replication controller with name: affinity-nodeport, namespace: services-1797, replica count: 3 I0720 02:50:45.238599 8 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0720 02:50:48.238820 8 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jul 20 02:50:48.345: INFO: Creating new exec pod Jul 20 02:50:55.870: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-1797 execpod-affinitygdlrn -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport 80' Jul 20 02:50:56.084: INFO: stderr: "I0720 02:50:56.013112 2508 log.go:181] (0xc000b45b80) (0xc000b3c820) Create stream\nI0720 02:50:56.013180 2508 log.go:181] (0xc000b45b80) (0xc000b3c820) Stream added, broadcasting: 1\nI0720 02:50:56.017760 2508 log.go:181] (0xc000b45b80) Reply frame received for 1\nI0720 02:50:56.017808 2508 log.go:181] (0xc000b45b80) (0xc000568be0) Create stream\nI0720 02:50:56.017827 2508 log.go:181] (0xc000b45b80) (0xc000568be0) Stream added, broadcasting: 3\nI0720 02:50:56.018634 2508 log.go:181] (0xc000b45b80) Reply frame received for 3\nI0720 02:50:56.018660 2508 log.go:181] (0xc000b45b80) (0xc000569ea0) Create stream\nI0720 02:50:56.018696 2508 log.go:181] (0xc000b45b80) (0xc000569ea0) Stream added, broadcasting: 5\nI0720 02:50:56.019337 2508 log.go:181] (0xc000b45b80) Reply frame received for 5\nI0720 02:50:56.075252 2508 log.go:181] (0xc000b45b80) Data frame received for 5\nI0720 02:50:56.075372 2508 log.go:181] (0xc000569ea0) (5) Data frame handling\nI0720 02:50:56.075425 2508 log.go:181] (0xc000569ea0) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport 80\nI0720 02:50:56.075750 2508 log.go:181] (0xc000b45b80) Data frame received for 5\nI0720 02:50:56.075780 2508 log.go:181] (0xc000569ea0) (5) Data frame handling\nI0720 02:50:56.075802 2508 log.go:181] (0xc000569ea0) (5) Data frame sent\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\nI0720 02:50:56.076495 2508 log.go:181] (0xc000b45b80) Data frame 
received for 5\nI0720 02:50:56.076524 2508 log.go:181] (0xc000569ea0) (5) Data frame handling\nI0720 02:50:56.076571 2508 log.go:181] (0xc000b45b80) Data frame received for 3\nI0720 02:50:56.076601 2508 log.go:181] (0xc000568be0) (3) Data frame handling\nI0720 02:50:56.078478 2508 log.go:181] (0xc000b45b80) Data frame received for 1\nI0720 02:50:56.078500 2508 log.go:181] (0xc000b3c820) (1) Data frame handling\nI0720 02:50:56.078548 2508 log.go:181] (0xc000b3c820) (1) Data frame sent\nI0720 02:50:56.078571 2508 log.go:181] (0xc000b45b80) (0xc000b3c820) Stream removed, broadcasting: 1\nI0720 02:50:56.078590 2508 log.go:181] (0xc000b45b80) Go away received\nI0720 02:50:56.078947 2508 log.go:181] (0xc000b45b80) (0xc000b3c820) Stream removed, broadcasting: 1\nI0720 02:50:56.078968 2508 log.go:181] (0xc000b45b80) (0xc000568be0) Stream removed, broadcasting: 3\nI0720 02:50:56.078983 2508 log.go:181] (0xc000b45b80) (0xc000569ea0) Stream removed, broadcasting: 5\n" Jul 20 02:50:56.084: INFO: stdout: "" Jul 20 02:50:56.084: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-1797 execpod-affinitygdlrn -- /bin/sh -x -c nc -zv -t -w 2 10.98.89.246 80' Jul 20 02:50:56.466: INFO: stderr: "I0720 02:50:56.394310 2526 log.go:181] (0xc00020cfd0) (0xc000b38aa0) Create stream\nI0720 02:50:56.394370 2526 log.go:181] (0xc00020cfd0) (0xc000b38aa0) Stream added, broadcasting: 1\nI0720 02:50:56.396981 2526 log.go:181] (0xc00020cfd0) Reply frame received for 1\nI0720 02:50:56.397053 2526 log.go:181] (0xc00020cfd0) (0xc000619f40) Create stream\nI0720 02:50:56.397070 2526 log.go:181] (0xc00020cfd0) (0xc000619f40) Stream added, broadcasting: 3\nI0720 02:50:56.397995 2526 log.go:181] (0xc00020cfd0) Reply frame received for 3\nI0720 02:50:56.398036 2526 log.go:181] (0xc00020cfd0) (0xc000412780) Create stream\nI0720 02:50:56.398047 2526 log.go:181] (0xc00020cfd0) (0xc000412780) Stream added, broadcasting: 5\nI0720 02:50:56.398784 2526 log.go:181] (0xc00020cfd0) Reply frame received for 5\nI0720 02:50:56.461710 2526 log.go:181] (0xc00020cfd0) Data frame received for 3\nI0720 02:50:56.461754 2526 log.go:181] (0xc000619f40) (3) Data frame handling\nI0720 02:50:56.461773 2526 log.go:181] (0xc00020cfd0) Data frame received for 5\nI0720 02:50:56.461779 2526 log.go:181] (0xc000412780) (5) Data frame handling\nI0720 02:50:56.461786 2526 log.go:181] (0xc000412780) (5) Data frame sent\nI0720 02:50:56.461791 2526 log.go:181] (0xc00020cfd0) Data frame received for 5\nI0720 02:50:56.461796 2526 log.go:181] (0xc000412780) (5) Data frame handling\n+ nc -zv -t -w 2 10.98.89.246 80\nConnection to 10.98.89.246 80 port [tcp/http] succeeded!\nI0720 02:50:56.463027 2526 log.go:181] (0xc00020cfd0) Data frame received for 1\nI0720 02:50:56.463048 2526 log.go:181] (0xc000b38aa0) (1) Data frame handling\nI0720 02:50:56.463070 2526 log.go:181] (0xc000b38aa0) (1) Data frame sent\nI0720 02:50:56.463088 2526 log.go:181] (0xc00020cfd0) (0xc000b38aa0) Stream removed, broadcasting: 1\nI0720 02:50:56.463153 2526 log.go:181] (0xc00020cfd0) Go away received\nI0720 02:50:56.463432 2526 log.go:181] (0xc00020cfd0) (0xc000b38aa0) Stream removed, broadcasting: 1\nI0720 02:50:56.463447 2526 log.go:181] (0xc00020cfd0) (0xc000619f40) Stream removed, broadcasting: 3\nI0720 02:50:56.463453 2526 log.go:181] (0xc00020cfd0) (0xc000412780) Stream removed, broadcasting: 5\n" Jul 20 02:50:56.467: INFO: stdout: "" Jul 20 02:50:56.467: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-1797 execpod-affinitygdlrn -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.14 30467' Jul 20 02:50:56.697: INFO: stderr: "I0720 02:50:56.613031 2542 log.go:181] (0xc00062eb00) (0xc000923e00) Create stream\nI0720 02:50:56.613113 2542 log.go:181] (0xc00062eb00) (0xc000923e00) Stream added, broadcasting: 1\nI0720 02:50:56.614896 2542 log.go:181] (0xc00062eb00) Reply frame received for 1\nI0720 02:50:56.614950 2542 log.go:181] (0xc00062eb00) (0xc0007512c0) Create stream\nI0720 02:50:56.614985 2542 log.go:181] (0xc00062eb00) (0xc0007512c0) Stream added, broadcasting: 3\nI0720 02:50:56.615943 2542 log.go:181] (0xc00062eb00) Reply frame received for 3\nI0720 02:50:56.615984 2542 log.go:181] (0xc00062eb00) (0xc000751c20) Create stream\nI0720 02:50:56.616006 2542 log.go:181] (0xc00062eb00) (0xc000751c20) Stream added, broadcasting: 5\nI0720 02:50:56.616893 2542 log.go:181] (0xc00062eb00) Reply frame received for 5\nI0720 02:50:56.689602 2542 log.go:181] (0xc00062eb00) Data frame received for 5\nI0720 02:50:56.689645 2542 log.go:181] (0xc000751c20) (5) Data frame handling\nI0720 02:50:56.689666 2542 log.go:181] (0xc000751c20) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.14 30467\nI0720 02:50:56.689950 2542 log.go:181] (0xc00062eb00) Data frame received for 5\nI0720 02:50:56.689978 2542 log.go:181] (0xc000751c20) (5) Data frame handling\nI0720 02:50:56.689999 2542 log.go:181] (0xc000751c20) (5) Data frame sent\nConnection to 172.18.0.14 30467 port [tcp/30467] succeeded!\nI0720 02:50:56.690370 2542 log.go:181] (0xc00062eb00) Data frame received for 3\nI0720 02:50:56.690397 2542 log.go:181] (0xc0007512c0) (3) Data frame handling\nI0720 02:50:56.690418 2542 log.go:181] (0xc00062eb00) Data frame received for 5\nI0720 02:50:56.690435 2542 log.go:181] (0xc000751c20) (5) Data frame handling\nI0720 02:50:56.691731 2542 log.go:181] (0xc00062eb00) Data frame received for 1\nI0720 02:50:56.691753 2542 log.go:181] (0xc000923e00) (1) Data frame handling\nI0720 02:50:56.691767 2542 log.go:181] (0xc000923e00) (1) Data frame sent\nI0720 02:50:56.691785 2542 log.go:181] (0xc00062eb00) (0xc000923e00) Stream removed, broadcasting: 1\nI0720 02:50:56.691807 2542 log.go:181] (0xc00062eb00) Go away received\nI0720 02:50:56.692175 2542 log.go:181] (0xc00062eb00) (0xc000923e00) Stream removed, broadcasting: 1\nI0720 02:50:56.692203 2542 log.go:181] (0xc00062eb00) (0xc0007512c0) Stream removed, broadcasting: 3\nI0720 02:50:56.692212 2542 log.go:181] (0xc00062eb00) (0xc000751c20) Stream removed, broadcasting: 5\n" Jul 20 02:50:56.697: INFO: stdout: "" Jul 20 02:50:56.697: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-1797 execpod-affinitygdlrn -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.12 30467' Jul 20 02:50:56.906: INFO: stderr: "I0720 02:50:56.829895 2560 log.go:181] (0xc0008c4f20) (0xc000171ea0) Create stream\nI0720 02:50:56.829937 2560 log.go:181] (0xc0008c4f20) (0xc000171ea0) Stream added, broadcasting: 1\nI0720 02:50:56.833060 2560 log.go:181] (0xc0008c4f20) Reply frame received for 1\nI0720 02:50:56.833118 2560 log.go:181] (0xc0008c4f20) (0xc0003305a0) Create stream\nI0720 02:50:56.833135 2560 log.go:181] (0xc0008c4f20) (0xc0003305a0) Stream added, broadcasting: 3\nI0720 02:50:56.835012 2560 log.go:181] (0xc0008c4f20) Reply frame received for 3\nI0720 02:50:56.835045 2560 log.go:181] (0xc0008c4f20) (0xc000330e60) Create stream\nI0720 02:50:56.835054 
2560 log.go:181] (0xc0008c4f20) (0xc000330e60) Stream added, broadcasting: 5\nI0720 02:50:56.835887 2560 log.go:181] (0xc0008c4f20) Reply frame received for 5\nI0720 02:50:56.899619 2560 log.go:181] (0xc0008c4f20) Data frame received for 5\nI0720 02:50:56.899664 2560 log.go:181] (0xc000330e60) (5) Data frame handling\nI0720 02:50:56.899682 2560 log.go:181] (0xc000330e60) (5) Data frame sent\nI0720 02:50:56.899692 2560 log.go:181] (0xc0008c4f20) Data frame received for 5\nI0720 02:50:56.899701 2560 log.go:181] (0xc000330e60) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.12 30467\nConnection to 172.18.0.12 30467 port [tcp/30467] succeeded!\nI0720 02:50:56.899729 2560 log.go:181] (0xc0008c4f20) Data frame received for 3\nI0720 02:50:56.899740 2560 log.go:181] (0xc0003305a0) (3) Data frame handling\nI0720 02:50:56.900889 2560 log.go:181] (0xc0008c4f20) Data frame received for 1\nI0720 02:50:56.900919 2560 log.go:181] (0xc000171ea0) (1) Data frame handling\nI0720 02:50:56.900931 2560 log.go:181] (0xc000171ea0) (1) Data frame sent\nI0720 02:50:56.900943 2560 log.go:181] (0xc0008c4f20) (0xc000171ea0) Stream removed, broadcasting: 1\nI0720 02:50:56.900958 2560 log.go:181] (0xc0008c4f20) Go away received\nI0720 02:50:56.901322 2560 log.go:181] (0xc0008c4f20) (0xc000171ea0) Stream removed, broadcasting: 1\nI0720 02:50:56.901337 2560 log.go:181] (0xc0008c4f20) (0xc0003305a0) Stream removed, broadcasting: 3\nI0720 02:50:56.901344 2560 log.go:181] (0xc0008c4f20) (0xc000330e60) Stream removed, broadcasting: 5\n" Jul 20 02:50:56.906: INFO: stdout: "" Jul 20 02:50:56.906: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-1797 execpod-affinitygdlrn -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.14:30467/ ; done' Jul 20 02:50:57.168: INFO: stderr: "I0720 02:50:57.027526 2578 log.go:181] (0xc0008b0b00) (0xc0009c6aa0) Create stream\nI0720 02:50:57.027574 2578 log.go:181] (0xc0008b0b00) (0xc0009c6aa0) Stream added, broadcasting: 1\nI0720 02:50:57.029297 2578 log.go:181] (0xc0008b0b00) Reply frame received for 1\nI0720 02:50:57.029330 2578 log.go:181] (0xc0008b0b00) (0xc000884640) Create stream\nI0720 02:50:57.029348 2578 log.go:181] (0xc0008b0b00) (0xc000884640) Stream added, broadcasting: 3\nI0720 02:50:57.030185 2578 log.go:181] (0xc0008b0b00) Reply frame received for 3\nI0720 02:50:57.030222 2578 log.go:181] (0xc0008b0b00) (0xc000866280) Create stream\nI0720 02:50:57.030233 2578 log.go:181] (0xc0008b0b00) (0xc000866280) Stream added, broadcasting: 5\nI0720 02:50:57.031040 2578 log.go:181] (0xc0008b0b00) Reply frame received for 5\nI0720 02:50:57.075469 2578 log.go:181] (0xc0008b0b00) Data frame received for 3\nI0720 02:50:57.075502 2578 log.go:181] (0xc000884640) (3) Data frame handling\nI0720 02:50:57.075511 2578 log.go:181] (0xc000884640) (3) Data frame sent\nI0720 02:50:57.075531 2578 log.go:181] (0xc0008b0b00) Data frame received for 5\nI0720 02:50:57.075538 2578 log.go:181] (0xc000866280) (5) Data frame handling\nI0720 02:50:57.075545 2578 log.go:181] (0xc000866280) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30467/\nI0720 02:50:57.079386 2578 log.go:181] (0xc0008b0b00) Data frame received for 3\nI0720 02:50:57.079404 2578 log.go:181] (0xc000884640) (3) Data frame handling\nI0720 02:50:57.079418 2578 log.go:181] (0xc000884640) (3) Data frame sent\nI0720 02:50:57.080046 2578 log.go:181] (0xc0008b0b00) Data frame received 
for 3\nI0720 02:50:57.080094 2578 log.go:181] (0xc000884640) (3) Data frame handling\nI0720 02:50:57.080114 2578 log.go:181] (0xc000884640) (3) Data frame sent\nI0720 02:50:57.080144 2578 log.go:181] (0xc0008b0b00) Data frame received for 5\nI0720 02:50:57.080161 2578 log.go:181] (0xc000866280) (5) Data frame handling\nI0720 02:50:57.080184 2578 log.go:181] (0xc000866280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30467/\nI0720 02:50:57.085933 2578 log.go:181] (0xc0008b0b00) Data frame received for 3\nI0720 02:50:57.085954 2578 log.go:181] (0xc000884640) (3) Data frame handling\nI0720 02:50:57.085968 2578 log.go:181] (0xc000884640) (3) Data frame sent\nI0720 02:50:57.086567 2578 log.go:181] (0xc0008b0b00) Data frame received for 3\nI0720 02:50:57.086589 2578 log.go:181] (0xc000884640) (3) Data frame handling\nI0720 02:50:57.086608 2578 log.go:181] (0xc000884640) (3) Data frame sent\nI0720 02:50:57.086637 2578 log.go:181] (0xc0008b0b00) Data frame received for 5\nI0720 02:50:57.086665 2578 log.go:181] (0xc000866280) (5) Data frame handling\nI0720 02:50:57.086688 2578 log.go:181] (0xc000866280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30467/\nI0720 02:50:57.091790 2578 log.go:181] (0xc0008b0b00) Data frame received for 3\nI0720 02:50:57.091820 2578 log.go:181] (0xc000884640) (3) Data frame handling\nI0720 02:50:57.091844 2578 log.go:181] (0xc000884640) (3) Data frame sent\nI0720 02:50:57.092502 2578 log.go:181] (0xc0008b0b00) Data frame received for 3\nI0720 02:50:57.092529 2578 log.go:181] (0xc0008b0b00) Data frame received for 5\nI0720 02:50:57.092563 2578 log.go:181] (0xc000866280) (5) Data frame handling\nI0720 02:50:57.092586 2578 log.go:181] (0xc000866280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30467/\nI0720 02:50:57.092604 2578 log.go:181] (0xc000884640) (3) Data frame handling\nI0720 02:50:57.092614 2578 log.go:181] (0xc000884640) (3) Data frame sent\nI0720 02:50:57.097247 2578 log.go:181] (0xc0008b0b00) Data frame received for 3\nI0720 02:50:57.097264 2578 log.go:181] (0xc000884640) (3) Data frame handling\nI0720 02:50:57.097272 2578 log.go:181] (0xc000884640) (3) Data frame sent\nI0720 02:50:57.097767 2578 log.go:181] (0xc0008b0b00) Data frame received for 5\nI0720 02:50:57.097783 2578 log.go:181] (0xc000866280) (5) Data frame handling\nI0720 02:50:57.097794 2578 log.go:181] (0xc000866280) (5) Data frame sent\n+ echo\nI0720 02:50:57.097802 2578 log.go:181] (0xc0008b0b00) Data frame received for 5\nI0720 02:50:57.097880 2578 log.go:181] (0xc0008b0b00) Data frame received for 3\nI0720 02:50:57.097914 2578 log.go:181] (0xc000884640) (3) Data frame handling\nI0720 02:50:57.097927 2578 log.go:181] (0xc000884640) (3) Data frame sent\nI0720 02:50:57.097943 2578 log.go:181] (0xc000866280) (5) Data frame handling\nI0720 02:50:57.097953 2578 log.go:181] (0xc000866280) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30467/\nI0720 02:50:57.104540 2578 log.go:181] (0xc0008b0b00) Data frame received for 3\nI0720 02:50:57.104565 2578 log.go:181] (0xc000884640) (3) Data frame handling\nI0720 02:50:57.104580 2578 log.go:181] (0xc000884640) (3) Data frame sent\nI0720 02:50:57.105482 2578 log.go:181] (0xc0008b0b00) Data frame received for 3\nI0720 02:50:57.105523 2578 log.go:181] (0xc000884640) (3) Data frame handling\nI0720 02:50:57.105550 2578 log.go:181] (0xc0008b0b00) Data frame received for 5\nI0720 02:50:57.105574 2578 log.go:181] (0xc000866280) (5) 
Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30467/\nI0720 02:50:57.105631 2578 log.go:181] (0xc000884640) (3) Data frame sent\nI0720 02:50:57.105665 2578 log.go:181] (0xc000866280) (5) Data frame sent\nI0720 02:50:57.109868 2578 log.go:181] (0xc0008b0b00) Data frame received for 3\nI0720 02:50:57.109885 2578 log.go:181] (0xc000884640) (3) Data frame handling\nI0720 02:50:57.109903 2578 log.go:181] (0xc000884640) (3) Data frame sent\nI0720 02:50:57.110724 2578 log.go:181] (0xc0008b0b00) Data frame received for 5\nI0720 02:50:57.110750 2578 log.go:181] (0xc000866280) (5) Data frame handling\nI0720 02:50:57.110767 2578 log.go:181] (0xc000866280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30467/\nI0720 02:50:57.110819 2578 log.go:181] (0xc0008b0b00) Data frame received for 3\nI0720 02:50:57.110839 2578 log.go:181] (0xc000884640) (3) Data frame handling\nI0720 02:50:57.110861 2578 log.go:181] (0xc000884640) (3) Data frame sent\nI0720 02:50:57.114690 2578 log.go:181] (0xc0008b0b00) Data frame received for 3\nI0720 02:50:57.114707 2578 log.go:181] (0xc000884640) (3) Data frame handling\nI0720 02:50:57.114714 2578 log.go:181] (0xc000884640) (3) Data frame sent\nI0720 02:50:57.115403 2578 log.go:181] (0xc0008b0b00) Data frame received for 3\nI0720 02:50:57.115430 2578 log.go:181] (0xc000884640) (3) Data frame handling\nI0720 02:50:57.115452 2578 log.go:181] (0xc0008b0b00) Data frame received for 5\nI0720 02:50:57.115476 2578 log.go:181] (0xc000866280) (5) Data frame handling\nI0720 02:50:57.115485 2578 log.go:181] (0xc000866280) (5) Data frame sent\nI0720 02:50:57.115497 2578 log.go:181] (0xc000884640) (3) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30467/\nI0720 02:50:57.121725 2578 log.go:181] (0xc0008b0b00) Data frame received for 3\nI0720 02:50:57.121752 2578 log.go:181] (0xc000884640) (3) Data frame handling\nI0720 02:50:57.121788 2578 log.go:181] (0xc000884640) (3) Data frame sent\nI0720 02:50:57.122173 2578 log.go:181] (0xc0008b0b00) Data frame received for 5\nI0720 02:50:57.122201 2578 log.go:181] (0xc000866280) (5) Data frame handling\nI0720 02:50:57.122218 2578 log.go:181] (0xc000866280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30467/\nI0720 02:50:57.122239 2578 log.go:181] (0xc0008b0b00) Data frame received for 3\nI0720 02:50:57.122250 2578 log.go:181] (0xc000884640) (3) Data frame handling\nI0720 02:50:57.122264 2578 log.go:181] (0xc000884640) (3) Data frame sent\nI0720 02:50:57.126576 2578 log.go:181] (0xc0008b0b00) Data frame received for 3\nI0720 02:50:57.126601 2578 log.go:181] (0xc000884640) (3) Data frame handling\nI0720 02:50:57.126612 2578 log.go:181] (0xc000884640) (3) Data frame sent\nI0720 02:50:57.132706 2578 log.go:181] (0xc0008b0b00) Data frame received for 5\nI0720 02:50:57.132812 2578 log.go:181] (0xc000866280) (5) Data frame handling\nI0720 02:50:57.132825 2578 log.go:181] (0xc000866280) (5) Data frame sent\n+ echo\nI0720 02:50:57.132907 2578 log.go:181] (0xc0008b0b00) Data frame received for 5\nI0720 02:50:57.132939 2578 log.go:181] (0xc000866280) (5) Data frame handling\nI0720 02:50:57.132952 2578 log.go:181] (0xc000866280) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30467/\nI0720 02:50:57.132970 2578 log.go:181] (0xc0008b0b00) Data frame received for 3\nI0720 02:50:57.132982 2578 log.go:181] (0xc000884640) (3) Data frame handling\nI0720 02:50:57.132995 2578 log.go:181] (0xc000884640) (3) Data 
frame sent\nI0720 02:50:57.133901 2578 log.go:181] (0xc0008b0b00) Data frame received for 3\nI0720 02:50:57.133959 2578 log.go:181] (0xc000884640) (3) Data frame handling\nI0720 02:50:57.133984 2578 log.go:181] (0xc000884640) (3) Data frame sent\nI0720 02:50:57.134288 2578 log.go:181] (0xc0008b0b00) Data frame received for 5\nI0720 02:50:57.134461 2578 log.go:181] (0xc000866280) (5) Data frame handling\nI0720 02:50:57.134491 2578 log.go:181] (0xc000866280) (5) Data frame sent\nI0720 02:50:57.134555 2578 log.go:181] (0xc0008b0b00) Data frame received for 5\nI0720 02:50:57.134570 2578 log.go:181] (0xc000866280) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30467/\nI0720 02:50:57.134589 2578 log.go:181] (0xc000866280) (5) Data frame sent\nI0720 02:50:57.134711 2578 log.go:181] (0xc0008b0b00) Data frame received for 3\nI0720 02:50:57.134728 2578 log.go:181] (0xc000884640) (3) Data frame handling\nI0720 02:50:57.134757 2578 log.go:181] (0xc000884640) (3) Data frame sent\nI0720 02:50:57.138021 2578 log.go:181] (0xc0008b0b00) Data frame received for 3\nI0720 02:50:57.138037 2578 log.go:181] (0xc000884640) (3) Data frame handling\nI0720 02:50:57.138050 2578 log.go:181] (0xc000884640) (3) Data frame sent\nI0720 02:50:57.138575 2578 log.go:181] (0xc0008b0b00) Data frame received for 3\nI0720 02:50:57.138585 2578 log.go:181] (0xc000884640) (3) Data frame handling\nI0720 02:50:57.138595 2578 log.go:181] (0xc000884640) (3) Data frame sent\nI0720 02:50:57.138600 2578 log.go:181] (0xc0008b0b00) Data frame received for 5\nI0720 02:50:57.138604 2578 log.go:181] (0xc000866280) (5) Data frame handling\nI0720 02:50:57.138608 2578 log.go:181] (0xc000866280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30467/\nI0720 02:50:57.142112 2578 log.go:181] (0xc0008b0b00) Data frame received for 3\nI0720 02:50:57.142129 2578 log.go:181] (0xc000884640) (3) Data frame handling\nI0720 02:50:57.142143 2578 log.go:181] (0xc000884640) (3) Data frame sent\nI0720 02:50:57.142490 2578 log.go:181] (0xc0008b0b00) Data frame received for 3\nI0720 02:50:57.142505 2578 log.go:181] (0xc000884640) (3) Data frame handling\nI0720 02:50:57.142515 2578 log.go:181] (0xc000884640) (3) Data frame sent\nI0720 02:50:57.142527 2578 log.go:181] (0xc0008b0b00) Data frame received for 5\nI0720 02:50:57.142534 2578 log.go:181] (0xc000866280) (5) Data frame handling\nI0720 02:50:57.142541 2578 log.go:181] (0xc000866280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30467/\nI0720 02:50:57.146362 2578 log.go:181] (0xc0008b0b00) Data frame received for 3\nI0720 02:50:57.146379 2578 log.go:181] (0xc000884640) (3) Data frame handling\nI0720 02:50:57.146392 2578 log.go:181] (0xc000884640) (3) Data frame sent\nI0720 02:50:57.146783 2578 log.go:181] (0xc0008b0b00) Data frame received for 3\nI0720 02:50:57.146800 2578 log.go:181] (0xc000884640) (3) Data frame handling\nI0720 02:50:57.146807 2578 log.go:181] (0xc000884640) (3) Data frame sent\nI0720 02:50:57.146817 2578 log.go:181] (0xc0008b0b00) Data frame received for 5\nI0720 02:50:57.146822 2578 log.go:181] (0xc000866280) (5) Data frame handling\nI0720 02:50:57.146828 2578 log.go:181] (0xc000866280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30467/\nI0720 02:50:57.150337 2578 log.go:181] (0xc0008b0b00) Data frame received for 3\nI0720 02:50:57.150350 2578 log.go:181] (0xc000884640) (3) Data frame handling\nI0720 02:50:57.150356 2578 log.go:181] (0xc000884640) 
(3) Data frame sent\nI0720 02:50:57.150968 2578 log.go:181] (0xc0008b0b00) Data frame received for 3\nI0720 02:50:57.150983 2578 log.go:181] (0xc000884640) (3) Data frame handling\nI0720 02:50:57.150990 2578 log.go:181] (0xc000884640) (3) Data frame sent\nI0720 02:50:57.150997 2578 log.go:181] (0xc0008b0b00) Data frame received for 5\nI0720 02:50:57.151003 2578 log.go:181] (0xc000866280) (5) Data frame handling\nI0720 02:50:57.151009 2578 log.go:181] (0xc000866280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30467/\nI0720 02:50:57.155048 2578 log.go:181] (0xc0008b0b00) Data frame received for 3\nI0720 02:50:57.155060 2578 log.go:181] (0xc000884640) (3) Data frame handling\nI0720 02:50:57.155070 2578 log.go:181] (0xc000884640) (3) Data frame sent\nI0720 02:50:57.155614 2578 log.go:181] (0xc0008b0b00) Data frame received for 3\nI0720 02:50:57.155638 2578 log.go:181] (0xc000884640) (3) Data frame handling\nI0720 02:50:57.155660 2578 log.go:181] (0xc000884640) (3) Data frame sent\nI0720 02:50:57.155680 2578 log.go:181] (0xc0008b0b00) Data frame received for 5\nI0720 02:50:57.155693 2578 log.go:181] (0xc000866280) (5) Data frame handling\nI0720 02:50:57.155705 2578 log.go:181] (0xc000866280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30467/\nI0720 02:50:57.160395 2578 log.go:181] (0xc0008b0b00) Data frame received for 3\nI0720 02:50:57.160412 2578 log.go:181] (0xc000884640) (3) Data frame handling\nI0720 02:50:57.160431 2578 log.go:181] (0xc000884640) (3) Data frame sent\nI0720 02:50:57.161182 2578 log.go:181] (0xc0008b0b00) Data frame received for 3\nI0720 02:50:57.161201 2578 log.go:181] (0xc000884640) (3) Data frame handling\nI0720 02:50:57.161315 2578 log.go:181] (0xc0008b0b00) Data frame received for 5\nI0720 02:50:57.161341 2578 log.go:181] (0xc000866280) (5) Data frame handling\nI0720 02:50:57.163120 2578 log.go:181] (0xc0008b0b00) Data frame received for 1\nI0720 02:50:57.163141 2578 log.go:181] (0xc0009c6aa0) (1) Data frame handling\nI0720 02:50:57.163159 2578 log.go:181] (0xc0009c6aa0) (1) Data frame sent\nI0720 02:50:57.163484 2578 log.go:181] (0xc0008b0b00) (0xc0009c6aa0) Stream removed, broadcasting: 1\nI0720 02:50:57.163831 2578 log.go:181] (0xc0008b0b00) (0xc0009c6aa0) Stream removed, broadcasting: 1\nI0720 02:50:57.163848 2578 log.go:181] (0xc0008b0b00) (0xc000884640) Stream removed, broadcasting: 3\nI0720 02:50:57.163856 2578 log.go:181] (0xc0008b0b00) (0xc000866280) Stream removed, broadcasting: 5\n" Jul 20 02:50:57.169: INFO: stdout: "\naffinity-nodeport-nc5ws\naffinity-nodeport-nc5ws\naffinity-nodeport-nc5ws\naffinity-nodeport-nc5ws\naffinity-nodeport-nc5ws\naffinity-nodeport-nc5ws\naffinity-nodeport-nc5ws\naffinity-nodeport-nc5ws\naffinity-nodeport-nc5ws\naffinity-nodeport-nc5ws\naffinity-nodeport-nc5ws\naffinity-nodeport-nc5ws\naffinity-nodeport-nc5ws\naffinity-nodeport-nc5ws\naffinity-nodeport-nc5ws\naffinity-nodeport-nc5ws" Jul 20 02:50:57.169: INFO: Received response from host: affinity-nodeport-nc5ws Jul 20 02:50:57.169: INFO: Received response from host: affinity-nodeport-nc5ws Jul 20 02:50:57.169: INFO: Received response from host: affinity-nodeport-nc5ws Jul 20 02:50:57.169: INFO: Received response from host: affinity-nodeport-nc5ws Jul 20 02:50:57.169: INFO: Received response from host: affinity-nodeport-nc5ws Jul 20 02:50:57.169: INFO: Received response from host: affinity-nodeport-nc5ws Jul 20 02:50:57.169: INFO: Received response from host: affinity-nodeport-nc5ws Jul 20 02:50:57.169: 
INFO: Received response from host: affinity-nodeport-nc5ws Jul 20 02:50:57.169: INFO: Received response from host: affinity-nodeport-nc5ws Jul 20 02:50:57.169: INFO: Received response from host: affinity-nodeport-nc5ws Jul 20 02:50:57.169: INFO: Received response from host: affinity-nodeport-nc5ws Jul 20 02:50:57.169: INFO: Received response from host: affinity-nodeport-nc5ws Jul 20 02:50:57.169: INFO: Received response from host: affinity-nodeport-nc5ws Jul 20 02:50:57.169: INFO: Received response from host: affinity-nodeport-nc5ws Jul 20 02:50:57.169: INFO: Received response from host: affinity-nodeport-nc5ws Jul 20 02:50:57.169: INFO: Received response from host: affinity-nodeport-nc5ws Jul 20 02:50:57.169: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport in namespace services-1797, will wait for the garbage collector to delete the pods Jul 20 02:50:57.543: INFO: Deleting ReplicationController affinity-nodeport took: 24.707309ms Jul 20 02:50:58.043: INFO: Terminating ReplicationController affinity-nodeport pods took: 500.276026ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:51:14.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1797" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:735 • [SLOW TEST:32.749 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":294,"completed":187,"skipped":3146,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:51:14.727: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 
'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:51:45.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2883" for this suite. • [SLOW TEST:30.982 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:42 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":294,"completed":188,"skipped":3157,"failed":0} SSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:51:45.709: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jul 20 02:51:45.817: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9d152bf2-0888-46dc-9c4d-986360579fba" in namespace "downward-api-5971" to be "Succeeded or Failed" Jul 20 02:51:45.841: INFO: Pod "downwardapi-volume-9d152bf2-0888-46dc-9c4d-986360579fba": Phase="Pending", Reason="", readiness=false. Elapsed: 23.446145ms Jul 20 02:51:47.896: INFO: Pod "downwardapi-volume-9d152bf2-0888-46dc-9c4d-986360579fba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078478955s Jul 20 02:51:49.900: INFO: Pod "downwardapi-volume-9d152bf2-0888-46dc-9c4d-986360579fba": Phase="Running", Reason="", readiness=true. Elapsed: 4.082634457s Jul 20 02:51:51.904: INFO: Pod "downwardapi-volume-9d152bf2-0888-46dc-9c4d-986360579fba": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.086347629s STEP: Saw pod success Jul 20 02:51:51.904: INFO: Pod "downwardapi-volume-9d152bf2-0888-46dc-9c4d-986360579fba" satisfied condition "Succeeded or Failed" Jul 20 02:51:51.923: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-9d152bf2-0888-46dc-9c4d-986360579fba container client-container: STEP: delete the pod Jul 20 02:51:51.974: INFO: Waiting for pod downwardapi-volume-9d152bf2-0888-46dc-9c4d-986360579fba to disappear Jul 20 02:51:51.997: INFO: Pod downwardapi-volume-9d152bf2-0888-46dc-9c4d-986360579fba no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:51:51.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5971" for this suite. • [SLOW TEST:6.296 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":294,"completed":189,"skipped":3160,"failed":0} SSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:51:52.006: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Jul 20 02:52:00.194: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jul 20 02:52:00.215: INFO: Pod pod-with-poststart-http-hook still exists Jul 20 02:52:02.215: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jul 20 02:52:02.219: INFO: Pod pod-with-poststart-http-hook still exists Jul 20 02:52:04.215: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jul 20 02:52:04.219: INFO: Pod pod-with-poststart-http-hook still exists Jul 20 02:52:06.215: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jul 20 02:52:06.219: INFO: Pod pod-with-poststart-http-hook still exists Jul 20 02:52:08.215: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jul 20 02:52:08.260: INFO: Pod pod-with-poststart-http-hook still exists Jul 20 02:52:10.215: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jul 20 02:52:10.219: INFO: Pod pod-with-poststart-http-hook still exists Jul 20 02:52:12.215: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jul 20 02:52:12.220: INFO: Pod pod-with-poststart-http-hook still exists Jul 20 02:52:14.215: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jul 20 02:52:14.219: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:52:14.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-1723" for this suite. 
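Editor's note on the poststart spec summarized below: the container under test carries a Lifecycle.PostStart HTTPGet handler aimed at the handler pod created in the BeforeEach. A minimal Go sketch of such a pod spec using the v1.19-era k8s.io/api types; the image, host IP, port and path here are illustrative assumptions, not values from the test source.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// Pod whose container fires an HTTP GET against the hook-handler pod
	// immediately after the container starts.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-http-hook"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "pod-with-poststart-http-hook",
				Image: "k8s.gcr.io/pause:3.2", // assumption: any long-running image works
				Lifecycle: &corev1.Lifecycle{
					PostStart: &corev1.Handler{ // named corev1.Handler in the v1.19 API
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/echo?msg=poststart", // hypothetical handler endpoint
							Host: "10.244.1.10",         // hypothetical handler pod IP
							Port: intstr.FromInt(8080),
						},
					},
				},
			}},
		},
	}
	fmt.Printf("poststart hook targets %s:%s\n",
		pod.Spec.Containers[0].Lifecycle.PostStart.HTTPGet.Host,
		pod.Spec.Containers[0].Lifecycle.PostStart.HTTPGet.Port.String())
}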
• [SLOW TEST:22.222 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":294,"completed":190,"skipped":3163,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:52:14.228: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 20 02:52:14.422: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:52:20.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-4037" for this suite. 
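Editor's note: the CRD listing spec above exercises nothing more exotic than a List call against the apiextensions API group. A minimal client-go sketch under the same v1.19-era APIs; the kubeconfig path matches the one the suite logs, and error handling is reduced to panics for brevity.

package main

import (
	"context"
	"fmt"

	apiextensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a rest.Config from the same kubeconfig the suite uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := apiextensionsclient.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// List every CustomResourceDefinition in the cluster.
	crds, err := cs.ApiextensionsV1().CustomResourceDefinitions().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, crd := range crds.Items {
		fmt.Println(crd.Name)
	}
}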
• [SLOW TEST:6.451 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":294,"completed":191,"skipped":3164,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:52:20.679: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-1aba2065-1bb2-44c2-9865-0d364e9337f0 STEP: Creating a pod to test consume secrets Jul 20 02:52:20.814: INFO: Waiting up to 5m0s for pod "pod-secrets-5fdd4093-483e-4d8b-9404-820b10cf3443" in namespace "secrets-3427" to be "Succeeded or Failed" Jul 20 02:52:20.846: INFO: Pod "pod-secrets-5fdd4093-483e-4d8b-9404-820b10cf3443": Phase="Pending", Reason="", readiness=false. Elapsed: 31.574594ms Jul 20 02:52:22.955: INFO: Pod "pod-secrets-5fdd4093-483e-4d8b-9404-820b10cf3443": Phase="Pending", Reason="", readiness=false. Elapsed: 2.140685781s Jul 20 02:52:24.958: INFO: Pod "pod-secrets-5fdd4093-483e-4d8b-9404-820b10cf3443": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.143932557s STEP: Saw pod success Jul 20 02:52:24.958: INFO: Pod "pod-secrets-5fdd4093-483e-4d8b-9404-820b10cf3443" satisfied condition "Succeeded or Failed" Jul 20 02:52:24.961: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-5fdd4093-483e-4d8b-9404-820b10cf3443 container secret-volume-test: STEP: delete the pod Jul 20 02:52:25.016: INFO: Waiting for pod pod-secrets-5fdd4093-483e-4d8b-9404-820b10cf3443 to disappear Jul 20 02:52:25.061: INFO: Pod pod-secrets-5fdd4093-483e-4d8b-9404-820b10cf3443 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:52:25.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3427" for this suite. STEP: Destroying namespace "secret-namespace-8379" for this suite. 
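Editor's note: the point of the two-namespace secrets spec above is that secret volume resolution is namespace-local; the pod in secrets-3427 mounts its own secret even though secret-namespace-8379 holds one with the same name. A minimal sketch of the consuming pod spec; the image, secret name and mount paths are illustrative assumptions.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-example", Namespace: "secrets-3427"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				// SecretName is resolved in the pod's own namespace only; a
				// same-named secret elsewhere cannot be reached this way.
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{SecretName: "secret-test"},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox", // assumption: any image with cat works
				Command: []string{"cat", "/etc/secret-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/etc/secret-volume",
					ReadOnly:  true,
				}},
			}},
		},
	}
	fmt.Println(pod.Name)
}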
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":294,"completed":192,"skipped":3177,"failed":0} SSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:52:25.083: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: set up a multi version CRD Jul 20 02:52:25.140: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not serverd STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:52:41.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6419" for this suite. • [SLOW TEST:16.312 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":294,"completed":193,"skipped":3182,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:52:41.396: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 20 02:52:41.436: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Jul 20 02:52:42.006: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-07-20T02:52:42Z generation:1 
managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-07-20T02:52:42Z]] name:name1 resourceVersion:106548 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:3dbead52-9894-4543-86aa-689c2c0f6f94] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Jul 20 02:52:52.011: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-07-20T02:52:52Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-07-20T02:52:52Z]] name:name2 resourceVersion:106583 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:a5a58b8e-a380-40eb-a2be-d63c6ea3da73] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Jul 20 02:53:02.019: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-07-20T02:52:42Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-07-20T02:53:02Z]] name:name1 resourceVersion:106608 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:3dbead52-9894-4543-86aa-689c2c0f6f94] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Jul 20 02:53:12.057: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-07-20T02:52:52Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-07-20T02:53:12Z]] name:name2 resourceVersion:106638 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:a5a58b8e-a380-40eb-a2be-d63c6ea3da73] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Jul 20 02:53:22.066: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-07-20T02:52:42Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-07-20T02:53:02Z]] name:name1 resourceVersion:106669 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:3dbead52-9894-4543-86aa-689c2c0f6f94] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Jul 20 02:53:32.075: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-07-20T02:52:52Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-07-20T02:53:12Z]] name:name2 resourceVersion:106699 
selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:a5a58b8e-a380-40eb-a2be-d63c6ea3da73] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:53:42.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-8732" for this suite. • [SLOW TEST:61.197 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":294,"completed":194,"skipped":3193,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:53:42.593: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override arguments Jul 20 02:53:42.697: INFO: Waiting up to 5m0s for pod "client-containers-d4bd63fb-1493-4ceb-973f-e29f240b9061" in namespace "containers-8297" to be "Succeeded or Failed" Jul 20 02:53:42.699: INFO: Pod "client-containers-d4bd63fb-1493-4ceb-973f-e29f240b9061": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039323ms Jul 20 02:53:44.703: INFO: Pod "client-containers-d4bd63fb-1493-4ceb-973f-e29f240b9061": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005702288s Jul 20 02:53:46.752: INFO: Pod "client-containers-d4bd63fb-1493-4ceb-973f-e29f240b9061": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.054261132s STEP: Saw pod success Jul 20 02:53:46.752: INFO: Pod "client-containers-d4bd63fb-1493-4ceb-973f-e29f240b9061" satisfied condition "Succeeded or Failed" Jul 20 02:53:46.754: INFO: Trying to get logs from node latest-worker2 pod client-containers-d4bd63fb-1493-4ceb-973f-e29f240b9061 container test-container: STEP: delete the pod Jul 20 02:53:46.798: INFO: Waiting for pod client-containers-d4bd63fb-1493-4ceb-973f-e29f240b9061 to disappear Jul 20 02:53:46.802: INFO: Pod client-containers-d4bd63fb-1493-4ceb-973f-e29f240b9061 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:53:46.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-8297" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":294,"completed":195,"skipped":3215,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:53:46.809: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override all Jul 20 02:53:46.972: INFO: Waiting up to 5m0s for pod "client-containers-2c7e220f-a898-43cf-b966-646fcfc19e7a" in namespace "containers-9893" to be "Succeeded or Failed" Jul 20 02:53:46.983: INFO: Pod "client-containers-2c7e220f-a898-43cf-b966-646fcfc19e7a": Phase="Pending", Reason="", readiness=false. Elapsed: 11.010975ms Jul 20 02:53:48.986: INFO: Pod "client-containers-2c7e220f-a898-43cf-b966-646fcfc19e7a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014316529s Jul 20 02:53:50.990: INFO: Pod "client-containers-2c7e220f-a898-43cf-b966-646fcfc19e7a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018564677s STEP: Saw pod success Jul 20 02:53:50.990: INFO: Pod "client-containers-2c7e220f-a898-43cf-b966-646fcfc19e7a" satisfied condition "Succeeded or Failed" Jul 20 02:53:50.999: INFO: Trying to get logs from node latest-worker2 pod client-containers-2c7e220f-a898-43cf-b966-646fcfc19e7a container test-container: STEP: delete the pod Jul 20 02:53:51.012: INFO: Waiting for pod client-containers-2c7e220f-a898-43cf-b966-646fcfc19e7a to disappear Jul 20 02:53:51.017: INFO: Pod client-containers-2c7e220f-a898-43cf-b966-646fcfc19e7a no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:53:51.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-9893" for this suite. 
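Editor's note: both Docker Containers specs above hinge on the same API behavior: Container.Command replaces the image's ENTRYPOINT and Container.Args replaces its CMD. A minimal sketch of a container fragment overriding both; the image and values are illustrative assumptions.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:  "test-container",
		Image: "busybox", // assumption
		// Command replaces the image ENTRYPOINT; Args replaces the image CMD.
		// Setting only Args keeps the ENTRYPOINT and swaps just the CMD, which
		// is what the "(docker cmd)" variant of the spec exercises.
		Command: []string{"/bin/echo"},
		Args:    []string{"override", "arguments"},
	}
	fmt.Println(c.Command, c.Args)
}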
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":294,"completed":196,"skipped":3225,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:53:51.036: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-d878e19c-f2ab-48aa-a3ee-877f72387d10 STEP: Creating a pod to test consume secrets Jul 20 02:53:51.133: INFO: Waiting up to 5m0s for pod "pod-secrets-0f12ce55-f72e-44f3-8f39-4fe01757a0ee" in namespace "secrets-1545" to be "Succeeded or Failed" Jul 20 02:53:51.137: INFO: Pod "pod-secrets-0f12ce55-f72e-44f3-8f39-4fe01757a0ee": Phase="Pending", Reason="", readiness=false. Elapsed: 4.362186ms Jul 20 02:53:53.141: INFO: Pod "pod-secrets-0f12ce55-f72e-44f3-8f39-4fe01757a0ee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008025912s Jul 20 02:53:55.145: INFO: Pod "pod-secrets-0f12ce55-f72e-44f3-8f39-4fe01757a0ee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012457497s STEP: Saw pod success Jul 20 02:53:55.145: INFO: Pod "pod-secrets-0f12ce55-f72e-44f3-8f39-4fe01757a0ee" satisfied condition "Succeeded or Failed" Jul 20 02:53:55.148: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-0f12ce55-f72e-44f3-8f39-4fe01757a0ee container secret-volume-test: STEP: delete the pod Jul 20 02:53:55.220: INFO: Waiting for pod pod-secrets-0f12ce55-f72e-44f3-8f39-4fe01757a0ee to disappear Jul 20 02:53:55.227: INFO: Pod pod-secrets-0f12ce55-f72e-44f3-8f39-4fe01757a0ee no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:53:55.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1545" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":294,"completed":197,"skipped":3241,"failed":0} SSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:53:55.237: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Jul 20 02:53:55.295: INFO: Waiting up to 5m0s for pod "downward-api-4f0212fe-3764-4b2d-af7b-f9d471366097" in namespace "downward-api-9823" to be "Succeeded or Failed" Jul 20 02:53:55.405: INFO: Pod "downward-api-4f0212fe-3764-4b2d-af7b-f9d471366097": Phase="Pending", Reason="", readiness=false. Elapsed: 109.597274ms Jul 20 02:53:57.409: INFO: Pod "downward-api-4f0212fe-3764-4b2d-af7b-f9d471366097": Phase="Pending", Reason="", readiness=false. Elapsed: 2.11383388s Jul 20 02:53:59.413: INFO: Pod "downward-api-4f0212fe-3764-4b2d-af7b-f9d471366097": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.117448099s STEP: Saw pod success Jul 20 02:53:59.413: INFO: Pod "downward-api-4f0212fe-3764-4b2d-af7b-f9d471366097" satisfied condition "Succeeded or Failed" Jul 20 02:53:59.415: INFO: Trying to get logs from node latest-worker2 pod downward-api-4f0212fe-3764-4b2d-af7b-f9d471366097 container dapi-container: STEP: delete the pod Jul 20 02:53:59.465: INFO: Waiting for pod downward-api-4f0212fe-3764-4b2d-af7b-f9d471366097 to disappear Jul 20 02:53:59.474: INFO: Pod downward-api-4f0212fe-3764-4b2d-af7b-f9d471366097 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:53:59.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9823" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":294,"completed":198,"skipped":3250,"failed":0} S ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:53:59.480: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod test-webserver-92a69db5-6c60-4e94-82b8-6f61eb156743 in namespace container-probe-2196 Jul 20 02:54:03.643: INFO: Started pod test-webserver-92a69db5-6c60-4e94-82b8-6f61eb156743 in namespace container-probe-2196 STEP: checking the pod's current state and verifying that restartCount is present Jul 20 02:54:03.647: INFO: Initial restart count of pod test-webserver-92a69db5-6c60-4e94-82b8-6f61eb156743 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:58:04.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2196" for this suite. 
• [SLOW TEST:245.338 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":294,"completed":199,"skipped":3251,"failed":0} SSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:58:04.818: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jul 20 02:58:09.599: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:58:09.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4923" for this suite. 
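Editor's note: the termination-message spec above checks that a container writing "OK" to its termination-log file surfaces that string in its status; with FallbackToLogsOnError, the log tail is used only when the container failed and the file is empty. A minimal sketch of such a container; the image is an assumption.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:    "termination-message-container",
		Image:   "busybox", // assumption
		Command: []string{"/bin/sh", "-c", "echo -n OK > /dev/termination-log"},
		// The kubelet reads the file at TerminationMessagePath; the
		// FallbackToLogsOnError policy substitutes the log tail only for
		// failed containers whose file is empty.
		TerminationMessagePath:   "/dev/termination-log",
		TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
	}
	fmt.Println(c.TerminationMessagePolicy)
}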
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":294,"completed":200,"skipped":3254,"failed":0} SSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:58:09.666: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-4840 STEP: creating a selector STEP: Creating the service pods in kubernetes Jul 20 02:58:09.731: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Jul 20 02:58:09.791: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jul 20 02:58:12.133: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jul 20 02:58:13.821: INFO: The status of Pod netserver-0 is Running (Ready = false) Jul 20 02:58:15.795: INFO: The status of Pod netserver-0 is Running (Ready = false) Jul 20 02:58:17.795: INFO: The status of Pod netserver-0 is Running (Ready = false) Jul 20 02:58:19.795: INFO: The status of Pod netserver-0 is Running (Ready = false) Jul 20 02:58:21.795: INFO: The status of Pod netserver-0 is Running (Ready = false) Jul 20 02:58:23.796: INFO: The status of Pod netserver-0 is Running (Ready = false) Jul 20 02:58:25.796: INFO: The status of Pod netserver-0 is Running (Ready = false) Jul 20 02:58:27.795: INFO: The status of Pod netserver-0 is Running (Ready = false) Jul 20 02:58:29.795: INFO: The status of Pod netserver-0 is Running (Ready = false) Jul 20 02:58:31.794: INFO: The status of Pod netserver-0 is Running (Ready = false) Jul 20 02:58:33.796: INFO: The status of Pod netserver-0 is Running (Ready = true) Jul 20 02:58:33.802: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Jul 20 02:58:39.935: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.35:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-4840 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 20 02:58:39.935: INFO: >>> kubeConfig: /root/.kube/config I0720 02:58:39.974530 8 log.go:181] (0xc0026f7c30) (0xc001cac140) Create stream I0720 02:58:39.974566 8 log.go:181] (0xc0026f7c30) (0xc001cac140) Stream added, broadcasting: 1 I0720 02:58:39.976636 8 log.go:181] (0xc0026f7c30) Reply frame received for 1 I0720 02:58:39.976681 8 log.go:181] (0xc0026f7c30) (0xc002332e60) Create stream I0720 02:58:39.976694 8 log.go:181] (0xc0026f7c30) (0xc002332e60) Stream added, broadcasting: 3 I0720 02:58:39.977573 8 log.go:181] (0xc0026f7c30) Reply frame received for 3 I0720 
02:58:39.977596 8 log.go:181] (0xc0026f7c30) (0xc001cac280) Create stream I0720 02:58:39.977601 8 log.go:181] (0xc0026f7c30) (0xc001cac280) Stream added, broadcasting: 5 I0720 02:58:39.978535 8 log.go:181] (0xc0026f7c30) Reply frame received for 5 I0720 02:58:40.039632 8 log.go:181] (0xc0026f7c30) Data frame received for 5 I0720 02:58:40.039693 8 log.go:181] (0xc001cac280) (5) Data frame handling I0720 02:58:40.039729 8 log.go:181] (0xc0026f7c30) Data frame received for 3 I0720 02:58:40.039749 8 log.go:181] (0xc002332e60) (3) Data frame handling I0720 02:58:40.039773 8 log.go:181] (0xc002332e60) (3) Data frame sent I0720 02:58:40.039789 8 log.go:181] (0xc0026f7c30) Data frame received for 3 I0720 02:58:40.039799 8 log.go:181] (0xc002332e60) (3) Data frame handling I0720 02:58:40.041169 8 log.go:181] (0xc0026f7c30) Data frame received for 1 I0720 02:58:40.041187 8 log.go:181] (0xc001cac140) (1) Data frame handling I0720 02:58:40.041196 8 log.go:181] (0xc001cac140) (1) Data frame sent I0720 02:58:40.041207 8 log.go:181] (0xc0026f7c30) (0xc001cac140) Stream removed, broadcasting: 1 I0720 02:58:40.041232 8 log.go:181] (0xc0026f7c30) Go away received I0720 02:58:40.041359 8 log.go:181] (0xc0026f7c30) (0xc001cac140) Stream removed, broadcasting: 1 I0720 02:58:40.041390 8 log.go:181] (0xc0026f7c30) (0xc002332e60) Stream removed, broadcasting: 3 I0720 02:58:40.041411 8 log.go:181] (0xc0026f7c30) (0xc001cac280) Stream removed, broadcasting: 5 Jul 20 02:58:40.041: INFO: Found all expected endpoints: [netserver-0] Jul 20 02:58:40.045: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.52:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-4840 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 20 02:58:40.045: INFO: >>> kubeConfig: /root/.kube/config I0720 02:58:40.070801 8 log.go:181] (0xc002104370) (0xc002333400) Create stream I0720 02:58:40.070832 8 log.go:181] (0xc002104370) (0xc002333400) Stream added, broadcasting: 1 I0720 02:58:40.074609 8 log.go:181] (0xc002104370) Reply frame received for 1 I0720 02:58:40.074723 8 log.go:181] (0xc002104370) (0xc0017fc140) Create stream I0720 02:58:40.074785 8 log.go:181] (0xc002104370) (0xc0017fc140) Stream added, broadcasting: 3 I0720 02:58:40.076221 8 log.go:181] (0xc002104370) Reply frame received for 3 I0720 02:58:40.076262 8 log.go:181] (0xc002104370) (0xc001f0b2c0) Create stream I0720 02:58:40.076280 8 log.go:181] (0xc002104370) (0xc001f0b2c0) Stream added, broadcasting: 5 I0720 02:58:40.077127 8 log.go:181] (0xc002104370) Reply frame received for 5 I0720 02:58:40.148383 8 log.go:181] (0xc002104370) Data frame received for 3 I0720 02:58:40.148407 8 log.go:181] (0xc0017fc140) (3) Data frame handling I0720 02:58:40.148415 8 log.go:181] (0xc0017fc140) (3) Data frame sent I0720 02:58:40.148419 8 log.go:181] (0xc002104370) Data frame received for 3 I0720 02:58:40.148423 8 log.go:181] (0xc0017fc140) (3) Data frame handling I0720 02:58:40.148690 8 log.go:181] (0xc002104370) Data frame received for 5 I0720 02:58:40.148703 8 log.go:181] (0xc001f0b2c0) (5) Data frame handling I0720 02:58:40.150663 8 log.go:181] (0xc002104370) Data frame received for 1 I0720 02:58:40.150675 8 log.go:181] (0xc002333400) (1) Data frame handling I0720 02:58:40.150681 8 log.go:181] (0xc002333400) (1) Data frame sent I0720 02:58:40.150690 8 log.go:181] (0xc002104370) (0xc002333400) Stream removed, broadcasting: 1 I0720 02:58:40.150754 8 
log.go:181] (0xc002104370) (0xc002333400) Stream removed, broadcasting: 1 I0720 02:58:40.150769 8 log.go:181] (0xc002104370) (0xc0017fc140) Stream removed, broadcasting: 3 I0720 02:58:40.150779 8 log.go:181] (0xc002104370) (0xc001f0b2c0) Stream removed, broadcasting: 5 Jul 20 02:58:40.150: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:58:40.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready I0720 02:58:40.150922 8 log.go:181] (0xc002104370) Go away received STEP: Destroying namespace "pod-network-test-4840" for this suite. • [SLOW TEST:30.492 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":201,"skipped":3263,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:58:40.159: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-8620.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8620.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8620.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-8620.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-8620.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-8620.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jul 20 02:58:48.521: INFO: DNS probes using dns-8620/dns-test-d2b8055c-aa8e-4cd1-ae72-b21028a3bdd8 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:58:49.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8620" for this suite. • [SLOW TEST:9.435 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":294,"completed":202,"skipped":3293,"failed":0} SSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:58:49.594: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 20 02:58:50.046: INFO: Creating deployment "webserver-deployment" Jul 20 02:58:50.132: INFO: Waiting for observed generation 1 Jul 20 02:58:52.581: INFO: Waiting for all required pods to come up Jul 20 02:58:52.691: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running Jul 20 02:59:02.700: INFO: Waiting for deployment "webserver-deployment" to complete Jul 20 02:59:02.706: INFO: Updating deployment "webserver-deployment" with a non-existent image Jul 20 02:59:02.712: INFO: Updating deployment webserver-deployment Jul 20 02:59:02.712: INFO: Waiting for observed generation 2 Jul 20 02:59:05.020: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Jul 20 02:59:05.023: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Jul 20 02:59:05.027: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Jul 20 02:59:05.034: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Jul 20 02:59:05.034: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Jul 20 02:59:05.036: INFO: Waiting for the second rollout's replicaset of deployment 
"webserver-deployment" to have desired number of replicas Jul 20 02:59:05.040: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Jul 20 02:59:05.040: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Jul 20 02:59:05.047: INFO: Updating deployment webserver-deployment Jul 20 02:59:05.047: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Jul 20 02:59:05.553: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Jul 20 02:59:06.177: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 Jul 20 02:59:07.444: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-9862 /apis/apps/v1/namespaces/deployment-9862/deployments/webserver-deployment 17147ee9-d201-479f-9a74-578d5b56363c 108125 3 2020-07-20 02:58:50 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-07-20 02:59:05 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-07-20 02:59:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00481f9c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-795d758f88" is progressing.,LastUpdateTime:2020-07-20 02:59:03 +0000 UTC,LastTransitionTime:2020-07-20 02:58:50 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-07-20 02:59:05 +0000 UTC,LastTransitionTime:2020-07-20 02:59:05 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Jul 20 02:59:07.613: INFO: New ReplicaSet "webserver-deployment-795d758f88" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-795d758f88 deployment-9862 /apis/apps/v1/namespaces/deployment-9862/replicasets/webserver-deployment-795d758f88 3032324e-de40-46bc-a350-ce61bb0ed7ea 108176 3 2020-07-20 02:59:02 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 17147ee9-d201-479f-9a74-578d5b56363c 0xc003d1d5b7 0xc003d1d5b8}] [] [{kube-controller-manager Update apps/v1 2020-07-20 02:59:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"17147ee9-d201-479f-9a74-578d5b56363c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 795d758f88,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003d1d638 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] 
nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jul 20 02:59:07.613: INFO: All old ReplicaSets of Deployment "webserver-deployment": Jul 20 02:59:07.614: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-dd94f59b7 deployment-9862 /apis/apps/v1/namespaces/deployment-9862/replicasets/webserver-deployment-dd94f59b7 8dbb2d45-c71c-47ed-85d1-0507be5d44d1 108169 3 2020-07-20 02:58:50 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 17147ee9-d201-479f-9a74-578d5b56363c 0xc003d1d697 0xc003d1d698}] [] [{kube-controller-manager Update apps/v1 2020-07-20 02:59:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"17147ee9-d201-479f-9a74-578d5b56363c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: dd94f59b7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003d1d708 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Jul 20 02:59:07.766: INFO: Pod "webserver-deployment-795d758f88-559fv" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-559fv webserver-deployment-795d758f88- deployment-9862 /api/v1/namespaces/deployment-9862/pods/webserver-deployment-795d758f88-559fv 5be8e676-c9b7-4085-a58a-9100dd81b2de 108165 0 2020-07-20 02:59:02 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet 
webserver-deployment-795d758f88 3032324e-de40-46bc-a350-ce61bb0ed7ea 0xc003d1dc47 0xc003d1dc48}] [] [{kube-controller-manager Update v1 2020-07-20 02:59:02 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3032324e-de40-46bc-a350-ce61bb0ed7ea\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-07-20 02:59:06 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.60\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-sq9jp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-sq9jp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-sq9jp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container
{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:59:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:59:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:59:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:59:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:10.244.2.60,StartTime:2020-07-20 02:59:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.60,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 20 02:59:07.766: INFO: Pod "webserver-deployment-795d758f88-chxgx" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-chxgx webserver-deployment-795d758f88- deployment-9862 /api/v1/namespaces/deployment-9862/pods/webserver-deployment-795d758f88-chxgx 0e4e116c-49f4-47ae-a3d2-539fabf22a94 108159 0 2020-07-20 02:59:06 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 3032324e-de40-46bc-a350-ce61bb0ed7ea 0xc003d1de20 0xc003d1de21}] [] [{kube-controller-manager Update v1 2020-07-20 02:59:06 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3032324e-de40-46bc-a350-ce61bb0ed7ea\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-sq9jp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-sq9jp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-sq9jp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-07-20 02:59:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 20 02:59:07.766: INFO: Pod "webserver-deployment-795d758f88-h7mnl" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-h7mnl webserver-deployment-795d758f88- deployment-9862 /api/v1/namespaces/deployment-9862/pods/webserver-deployment-795d758f88-h7mnl 7b3ecd56-b1d8-4f90-9e7c-fb416796cc7b 108144 0 2020-07-20 02:59:05 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 3032324e-de40-46bc-a350-ce61bb0ed7ea 0xc003d1df60 0xc003d1df61}] [] [{kube-controller-manager Update v1 2020-07-20 02:59:05 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3032324e-de40-46bc-a350-ce61bb0ed7ea\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-sq9jp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-sq9jp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-sq9jp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},
Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:59:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 20 02:59:07.767: INFO: Pod "webserver-deployment-795d758f88-lbmbg" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-lbmbg webserver-deployment-795d758f88- deployment-9862 /api/v1/namespaces/deployment-9862/pods/webserver-deployment-795d758f88-lbmbg 72f7b5ac-8a69-45d5-8a7a-09639d25d07f 108088 0 2020-07-20 02:59:02 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 3032324e-de40-46bc-a350-ce61bb0ed7ea 0xc002306240 0xc002306241}] [] [{kube-controller-manager Update v1 2020-07-20 02:59:02 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3032324e-de40-46bc-a350-ce61bb0ed7ea\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-07-20 02:59:03 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-sq9jp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-sq9jp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-sq9jp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:59:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-07-20 02:59:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:59:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:59:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2020-07-20 02:59:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 20 02:59:07.767: INFO: Pod "webserver-deployment-795d758f88-lgn9q" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-lgn9q webserver-deployment-795d758f88- deployment-9862 /api/v1/namespaces/deployment-9862/pods/webserver-deployment-795d758f88-lgn9q 83a4e548-e53f-4c3f-9361-334c75357ebc 108157 0 2020-07-20 02:59:06 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 3032324e-de40-46bc-a350-ce61bb0ed7ea 0xc002306910 0xc002306911}] [] [{kube-controller-manager Update v1 2020-07-20 02:59:06 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3032324e-de40-46bc-a350-ce61bb0ed7ea\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-sq9jp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-sq9jp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-sq9jp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,
AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:59:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 20 02:59:07.767: INFO: Pod "webserver-deployment-795d758f88-p6vgv" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-p6vgv webserver-deployment-795d758f88- deployment-9862 /api/v1/namespaces/deployment-9862/pods/webserver-deployment-795d758f88-p6vgv 0f4f7f6c-403f-4d0b-8afe-f72bc5bb7ef8 108145 0 2020-07-20 02:59:05 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 3032324e-de40-46bc-a350-ce61bb0ed7ea 0xc002306e00 0xc002306e01}] [] [{kube-controller-manager Update v1 2020-07-20 02:59:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3032324e-de40-46bc-a350-ce61bb0ed7ea\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-sq9jp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-sq9jp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-sq9jp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-07-20 02:59:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 20 02:59:07.767: INFO: Pod "webserver-deployment-795d758f88-qd6qs" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-qd6qs webserver-deployment-795d758f88- deployment-9862 /api/v1/namespaces/deployment-9862/pods/webserver-deployment-795d758f88-qd6qs 17b627cb-4ef5-4336-8bbd-5056787f499d 108094 0 2020-07-20 02:59:03 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 3032324e-de40-46bc-a350-ce61bb0ed7ea 0xc002307a80 0xc002307a81}] [] [{kube-controller-manager Update v1 2020-07-20 02:59:03 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3032324e-de40-46bc-a350-ce61bb0ed7ea\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-07-20 02:59:03 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-sq9jp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-sq9jp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-sq9jp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice
{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:59:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:59:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:59:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:59:03 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2020-07-20 02:59:03 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 20 02:59:07.768: INFO: Pod "webserver-deployment-795d758f88-qkmcx" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-qkmcx webserver-deployment-795d758f88- deployment-9862 /api/v1/namespaces/deployment-9862/pods/webserver-deployment-795d758f88-qkmcx 997c0dde-0633-4f9f-a6b4-2fdd2d9bb9a4 108156 0 2020-07-20 02:59:06 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 3032324e-de40-46bc-a350-ce61bb0ed7ea 0xc002307dd0 0xc002307dd1}] [] [{kube-controller-manager Update v1 2020-07-20 02:59:06 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3032324e-de40-46bc-a350-ce61bb0ed7ea\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-sq9jp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-sq9jp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-sq9jp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-07-20 02:59:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 20 02:59:07.768: INFO: Pod "webserver-deployment-795d758f88-qlx9x" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-qlx9x webserver-deployment-795d758f88- deployment-9862 /api/v1/namespaces/deployment-9862/pods/webserver-deployment-795d758f88-qlx9x 1bcfc6c6-c064-44a6-9097-e9a497ba43bc 108175 0 2020-07-20 02:59:05 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 3032324e-de40-46bc-a350-ce61bb0ed7ea 0xc0034ba0a0 0xc0034ba0a1}] [] [{kube-controller-manager Update v1 2020-07-20 02:59:05 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3032324e-de40-46bc-a350-ce61bb0ed7ea\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-07-20 02:59:07 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-sq9jp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-sq9jp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-sq9jp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice
{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:59:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:59:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:59:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:59:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2020-07-20 02:59:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 20 02:59:07.768: INFO: Pod "webserver-deployment-795d758f88-skc4f" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-skc4f webserver-deployment-795d758f88- deployment-9862 /api/v1/namespaces/deployment-9862/pods/webserver-deployment-795d758f88-skc4f 08f54c67-849a-4cea-85da-c0e4a23f3b8b 108163 0 2020-07-20 02:59:06 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 3032324e-de40-46bc-a350-ce61bb0ed7ea 0xc0034ba6b0 0xc0034ba6b1}] [] [{kube-controller-manager Update v1 2020-07-20 02:59:06 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3032324e-de40-46bc-a350-ce61bb0ed7ea\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-sq9jp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-sq9jp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-sq9jp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-07-20 02:59:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 20 02:59:07.769: INFO: Pod "webserver-deployment-795d758f88-snwp8" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-snwp8 webserver-deployment-795d758f88- deployment-9862 /api/v1/namespaces/deployment-9862/pods/webserver-deployment-795d758f88-snwp8 6bf194ee-d759-47ea-9085-0522c2e6aaa7 108076 0 2020-07-20 02:59:02 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 3032324e-de40-46bc-a350-ce61bb0ed7ea 0xc0034ba7f0 0xc0034ba7f1}] [] [{kube-controller-manager Update v1 2020-07-20 02:59:02 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3032324e-de40-46bc-a350-ce61bb0ed7ea\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-07-20 02:59:03 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-sq9jp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-sq9jp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-sq9jp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice
{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:59:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:59:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:59:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:59:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2020-07-20 02:59:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 20 02:59:07.770: INFO: Pod "webserver-deployment-795d758f88-ws7n7" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-ws7n7 webserver-deployment-795d758f88- deployment-9862 /api/v1/namespaces/deployment-9862/pods/webserver-deployment-795d758f88-ws7n7 1fcdadd6-d478-45b9-a4b6-220a7bd8e67d 108162 0 2020-07-20 02:59:06 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 3032324e-de40-46bc-a350-ce61bb0ed7ea 0xc0034ba9a0 0xc0034ba9a1}] [] [{kube-controller-manager Update v1 2020-07-20 02:59:06 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3032324e-de40-46bc-a350-ce61bb0ed7ea\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-sq9jp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-sq9jp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-sq9jp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-07-20 02:59:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 20 02:59:07.770: INFO: Pod "webserver-deployment-795d758f88-xkff4" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-xkff4 webserver-deployment-795d758f88- deployment-9862 /api/v1/namespaces/deployment-9862/pods/webserver-deployment-795d758f88-xkff4 811a11b0-92e5-4c68-b0fe-483edf0dff5f 108092 0 2020-07-20 02:59:02 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 3032324e-de40-46bc-a350-ce61bb0ed7ea 0xc0034baae0 0xc0034baae1}] [] [{kube-controller-manager Update v1 2020-07-20 02:59:02 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3032324e-de40-46bc-a350-ce61bb0ed7ea\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-07-20 02:59:03 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-sq9jp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-sq9jp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-sq9jp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice
{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:59:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:59:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:59:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:59:03 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2020-07-20 02:59:03 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 20 02:59:07.770: INFO: Pod "webserver-deployment-dd94f59b7-299xd" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-299xd webserver-deployment-dd94f59b7- deployment-9862 /api/v1/namespaces/deployment-9862/pods/webserver-deployment-dd94f59b7-299xd 993b93a2-91e6-46e9-a4aa-1bb4c73fc8ab 108182 0 2020-07-20 02:59:05 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 8dbb2d45-c71c-47ed-85d1-0507be5d44d1 0xc0034bac80 0xc0034bac81}] [] [{kube-controller-manager Update v1 2020-07-20 02:59:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8dbb2d45-c71c-47ed-85d1-0507be5d44d1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-07-20 02:59:07 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-sq9jp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-sq9jp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-sq9jp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,
Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:59:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:59:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:59:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:59:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2020-07-20 02:59:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 20 02:59:07.770: INFO: Pod "webserver-deployment-dd94f59b7-6gpzk" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-6gpzk webserver-deployment-dd94f59b7- deployment-9862 /api/v1/namespaces/deployment-9862/pods/webserver-deployment-dd94f59b7-6gpzk 3f851b93-cf59-42ad-804d-2e6902cd894c 108035 0 2020-07-20 02:58:50 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 8dbb2d45-c71c-47ed-85d1-0507be5d44d1 0xc0034bae07 0xc0034bae08}] [] [{kube-controller-manager Update v1 2020-07-20 02:58:50 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8dbb2d45-c71c-47ed-85d1-0507be5d44d1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-07-20 02:59:02 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.39\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-sq9jp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-sq9jp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-sq9jp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:58:51 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:59:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:59:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:58:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.1.39,StartTime:2020-07-20 02:58:51 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-20 02:59:01 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://3c2d2aafdf44fb06a1a42094aae3b869070149e9719fade97098da332b8f9886,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.39,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 20 02:59:07.771: INFO: Pod "webserver-deployment-dd94f59b7-7bktw" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-7bktw webserver-deployment-dd94f59b7- deployment-9862 /api/v1/namespaces/deployment-9862/pods/webserver-deployment-dd94f59b7-7bktw 32f6e1e1-1e3b-4bb4-bf50-3eee3b956cdc 108187 0 2020-07-20 02:59:05 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 8dbb2d45-c71c-47ed-85d1-0507be5d44d1 0xc0034bafb7 0xc0034bafb8}] [] [{kube-controller-manager Update v1 2020-07-20 02:59:05 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8dbb2d45-c71c-47ed-85d1-0507be5d44d1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-07-20 02:59:07 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-sq9jp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-sq9jp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-sq9jp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:59:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-07-20 02:59:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:59:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:59:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2020-07-20 02:59:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 20 02:59:07.771: INFO: Pod "webserver-deployment-dd94f59b7-8nqh2" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-8nqh2 webserver-deployment-dd94f59b7- deployment-9862 /api/v1/namespaces/deployment-9862/pods/webserver-deployment-dd94f59b7-8nqh2 b87d01c9-a042-409b-a333-06bd9c0d0601 107987 0 2020-07-20 02:58:50 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 8dbb2d45-c71c-47ed-85d1-0507be5d44d1 0xc0034bb147 0xc0034bb148}] [] [{kube-controller-manager Update v1 2020-07-20 02:58:50 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8dbb2d45-c71c-47ed-85d1-0507be5d44d1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-07-20 02:58:59 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.55\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-sq9jp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-sq9jp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-sq9jp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:58:50 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:58:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:58:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:58:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:10.244.2.55,StartTime:2020-07-20 02:58:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-20 02:58:57 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://e3f0f42695cbc0f26a52fd77150fa50823a364e47356eb1897a1c9532aec67d1,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.55,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 20 02:59:07.771: INFO: Pod "webserver-deployment-dd94f59b7-9bxck" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-9bxck webserver-deployment-dd94f59b7- deployment-9862 /api/v1/namespaces/deployment-9862/pods/webserver-deployment-dd94f59b7-9bxck e897dcae-21a4-4ca5-aa51-5321c5ca8fee 108141 0 2020-07-20 02:59:05 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 8dbb2d45-c71c-47ed-85d1-0507be5d44d1 0xc0034bb2f7 0xc0034bb2f8}] [] [{kube-controller-manager Update v1 2020-07-20 02:59:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8dbb2d45-c71c-47ed-85d1-0507be5d44d1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-sq9jp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-sq9jp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-sq9jp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:59:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 20 02:59:07.771: INFO: Pod "webserver-deployment-dd94f59b7-9n7mb" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-9n7mb webserver-deployment-dd94f59b7- deployment-9862 /api/v1/namespaces/deployment-9862/pods/webserver-deployment-dd94f59b7-9n7mb 0d016b43-5814-47b0-ad9c-ef00749b6f32 108021 0 2020-07-20 02:58:50 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 8dbb2d45-c71c-47ed-85d1-0507be5d44d1 0xc0034bb420 0xc0034bb421}] [] [{kube-controller-manager Update v1 2020-07-20 02:58:50 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8dbb2d45-c71c-47ed-85d1-0507be5d44d1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-07-20 02:59:01 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.56\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-sq9jp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-sq9jp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-sq9jp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSo
urce{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:58:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:59:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:59:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:58:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:10.244.2.56,StartTime:2020-07-20 02:58:51 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-20 02:59:00 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://9738e0c94baab04face7d2b4411093385d820f5390939e710deaccb55d750e5a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.56,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 20 02:59:07.771: INFO: Pod "webserver-deployment-dd94f59b7-crqfp" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-crqfp webserver-deployment-dd94f59b7- deployment-9862 /api/v1/namespaces/deployment-9862/pods/webserver-deployment-dd94f59b7-crqfp 36df2ff3-a839-426e-9b11-46f9f00c5f18 108030 0 2020-07-20 02:58:50 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 8dbb2d45-c71c-47ed-85d1-0507be5d44d1 0xc0034bb5c7 0xc0034bb5c8}] [] [{kube-controller-manager Update v1 2020-07-20 02:58:50 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8dbb2d45-c71c-47ed-85d1-0507be5d44d1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-07-20 02:59:01 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.57\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-sq9jp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-sq9jp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-sq9jp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubern
etes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:58:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:59:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:59:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:58:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:10.244.2.57,StartTime:2020-07-20 02:58:51 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-20 02:59:00 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://58fd57d97ff70bd60349d5dbd66ebcd2397f55ab4d9215956cd05a18377e8c43,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.57,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 20 02:59:07.771: INFO: Pod "webserver-deployment-dd94f59b7-d26cs" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-d26cs webserver-deployment-dd94f59b7- deployment-9862 /api/v1/namespaces/deployment-9862/pods/webserver-deployment-dd94f59b7-d26cs ad655183-43b6-4b05-b7a0-f0e3ccc88088 108130 0 2020-07-20 02:59:05 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 8dbb2d45-c71c-47ed-85d1-0507be5d44d1 0xc0034bb777 0xc0034bb778}] [] [{kube-controller-manager Update v1 2020-07-20 02:59:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8dbb2d45-c71c-47ed-85d1-0507be5d44d1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-sq9jp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-sq9jp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-sq9jp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:59:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 20 02:59:07.772: INFO: Pod "webserver-deployment-dd94f59b7-fvpmj" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-fvpmj webserver-deployment-dd94f59b7- deployment-9862 /api/v1/namespaces/deployment-9862/pods/webserver-deployment-dd94f59b7-fvpmj 49d77723-61f6-43d9-a731-cc0dcb9a36d6 108037 0 2020-07-20 02:58:50 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 8dbb2d45-c71c-47ed-85d1-0507be5d44d1 0xc0034bb8a0 0xc0034bb8a1}] [] [{kube-controller-manager Update v1 2020-07-20 02:58:50 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8dbb2d45-c71c-47ed-85d1-0507be5d44d1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-07-20 02:59:02 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.36\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-sq9jp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-sq9jp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-sq9jp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSo
urce{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:58:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:59:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:59:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:58:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.1.36,StartTime:2020-07-20 02:58:51 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-20 02:59:01 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://8a2fe52aec00f9ba6380c0a9555b246a33072d65c9ede07dd81141b9df2320bb,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.36,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 20 02:59:07.772: INFO: Pod "webserver-deployment-dd94f59b7-ktvzc" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-ktvzc webserver-deployment-dd94f59b7- deployment-9862 /api/v1/namespaces/deployment-9862/pods/webserver-deployment-dd94f59b7-ktvzc 30abcc21-990f-4a6a-a853-6ab5a44b9528 108127 0 2020-07-20 02:59:05 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 8dbb2d45-c71c-47ed-85d1-0507be5d44d1 0xc0034bba47 0xc0034bba48}] [] [{kube-controller-manager Update v1 2020-07-20 02:59:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8dbb2d45-c71c-47ed-85d1-0507be5d44d1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-sq9jp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-sq9jp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-sq9jp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:59:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 20 02:59:07.772: INFO: Pod "webserver-deployment-dd94f59b7-mhlfv" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-mhlfv webserver-deployment-dd94f59b7- deployment-9862 /api/v1/namespaces/deployment-9862/pods/webserver-deployment-dd94f59b7-mhlfv 8161f05e-18e5-4b09-bfff-36483bdbc8ef 108154 0 2020-07-20 02:59:06 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 8dbb2d45-c71c-47ed-85d1-0507be5d44d1 0xc0034bbb90 0xc0034bbb91}] [] [{kube-controller-manager Update v1 2020-07-20 02:59:06 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8dbb2d45-c71c-47ed-85d1-0507be5d44d1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-sq9jp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-sq9jp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-sq9jp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},I
magePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:59:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 20 02:59:07.772: INFO: Pod "webserver-deployment-dd94f59b7-mv765" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-mv765 webserver-deployment-dd94f59b7- deployment-9862 /api/v1/namespaces/deployment-9862/pods/webserver-deployment-dd94f59b7-mv765 f6cbfbd0-baf4-46ac-8493-ea0809ebd955 108153 0 2020-07-20 02:59:06 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 8dbb2d45-c71c-47ed-85d1-0507be5d44d1 0xc0034bbcc0 0xc0034bbcc1}] [] [{kube-controller-manager Update v1 2020-07-20 02:59:06 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8dbb2d45-c71c-47ed-85d1-0507be5d44d1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-sq9jp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-sq9jp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-sq9jp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:59:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 20 02:59:07.772: INFO: Pod "webserver-deployment-dd94f59b7-mvrp2" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-mvrp2 webserver-deployment-dd94f59b7- deployment-9862 /api/v1/namespaces/deployment-9862/pods/webserver-deployment-dd94f59b7-mvrp2 5ec50c52-435e-430a-860d-adff3805b592 108155 0 2020-07-20 02:59:06 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 8dbb2d45-c71c-47ed-85d1-0507be5d44d1 0xc0034bbdf0 0xc0034bbdf1}] [] [{kube-controller-manager Update v1 2020-07-20 02:59:06 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8dbb2d45-c71c-47ed-85d1-0507be5d44d1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-sq9jp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-sq9jp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-sq9jp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},Im
agePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:59:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 20 02:59:07.773: INFO: Pod "webserver-deployment-dd94f59b7-q55ft" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-q55ft webserver-deployment-dd94f59b7- deployment-9862 /api/v1/namespaces/deployment-9862/pods/webserver-deployment-dd94f59b7-q55ft 7841e1d3-1346-4a7b-8899-3f853f1cee7e 108041 0 2020-07-20 02:58:50 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 8dbb2d45-c71c-47ed-85d1-0507be5d44d1 0xc0034bbf20 0xc0034bbf21}] [] [{kube-controller-manager Update v1 2020-07-20 02:58:50 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8dbb2d45-c71c-47ed-85d1-0507be5d44d1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-07-20 02:59:02 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.40\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-sq9jp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-sq9jp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-sq9jp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:58:51 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:59:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:59:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:58:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.1.40,StartTime:2020-07-20 02:58:51 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-20 02:59:01 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://226a41473a944e06c00fad3a1e9e3be7eaec334a22af6494b92c62245255ea48,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.40,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 20 02:59:07.773: INFO: Pod "webserver-deployment-dd94f59b7-qsmgh" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-qsmgh webserver-deployment-dd94f59b7- deployment-9862 /api/v1/namespaces/deployment-9862/pods/webserver-deployment-dd94f59b7-qsmgh 367becde-be51-4689-ae09-832d46dbce65 108128 0 2020-07-20 02:59:05 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 8dbb2d45-c71c-47ed-85d1-0507be5d44d1 0xc0007a0517 0xc0007a0518}] [] [{kube-controller-manager Update v1 2020-07-20 02:59:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8dbb2d45-c71c-47ed-85d1-0507be5d44d1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-sq9jp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-sq9jp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-sq9jp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:59:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 20 02:59:07.773: INFO: Pod "webserver-deployment-dd94f59b7-r8b8r" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-r8b8r webserver-deployment-dd94f59b7- deployment-9862 /api/v1/namespaces/deployment-9862/pods/webserver-deployment-dd94f59b7-r8b8r 889c17c2-d1f4-4ca1-983d-2f26cc2d39d3 108016 0 2020-07-20 02:58:50 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 8dbb2d45-c71c-47ed-85d1-0507be5d44d1 0xc00242aef0 0xc00242aef1}] [] [{kube-controller-manager Update v1 2020-07-20 02:58:50 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8dbb2d45-c71c-47ed-85d1-0507be5d44d1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-07-20 02:59:01 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.38\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-sq9jp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-sq9jp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-sq9jp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSo
urce{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:58:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:59:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:59:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:58:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.1.38,StartTime:2020-07-20 02:58:51 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-20 02:59:00 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://e6d097252ce8c95fdc7c43092055c7fea3e97f27e7aed047ae3b2f100df9cfb8,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.38,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 20 02:59:07.773: INFO: Pod "webserver-deployment-dd94f59b7-v6gf9" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-v6gf9 webserver-deployment-dd94f59b7- deployment-9862 /api/v1/namespaces/deployment-9862/pods/webserver-deployment-dd94f59b7-v6gf9 1408764b-affa-413a-987f-2e683fd0bbf6 108008 0 2020-07-20 02:58:50 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 8dbb2d45-c71c-47ed-85d1-0507be5d44d1 0xc00242bde7 0xc00242bde8}] [] [{kube-controller-manager Update v1 2020-07-20 02:58:50 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8dbb2d45-c71c-47ed-85d1-0507be5d44d1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-07-20 02:59:01 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.37\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-sq9jp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-sq9jp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-sq9jp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kuberne
tes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:58:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:59:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:59:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:58:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.1.37,StartTime:2020-07-20 02:58:51 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-20 02:59:00 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://ed2d67906a597689aea7f18145818c9c215f83c1f1cb58f546c64bdcd3048fd7,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.37,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 20 02:59:07.773: INFO: Pod "webserver-deployment-dd94f59b7-vcsdp" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-vcsdp webserver-deployment-dd94f59b7- deployment-9862 /api/v1/namespaces/deployment-9862/pods/webserver-deployment-dd94f59b7-vcsdp 1685c36f-821a-4216-ae2a-023fdcc0c745 108161 0 2020-07-20 02:59:06 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 8dbb2d45-c71c-47ed-85d1-0507be5d44d1 0xc00242bf97 0xc00242bf98}] [] [{kube-controller-manager Update v1 2020-07-20 02:59:06 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8dbb2d45-c71c-47ed-85d1-0507be5d44d1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-sq9jp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-sq9jp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-sq9jp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:59:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 20 02:59:07.774: INFO: Pod "webserver-deployment-dd94f59b7-xz2mg" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-xz2mg webserver-deployment-dd94f59b7- deployment-9862 /api/v1/namespaces/deployment-9862/pods/webserver-deployment-dd94f59b7-xz2mg 82e7b81d-9cee-440c-b95a-e5db171dc395 108143 0 2020-07-20 02:59:05 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 8dbb2d45-c71c-47ed-85d1-0507be5d44d1 0xc0026460c0 0xc0026460c1}] [] [{kube-controller-manager Update v1 2020-07-20 02:59:05 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8dbb2d45-c71c-47ed-85d1-0507be5d44d1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-sq9jp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-sq9jp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-sq9jp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},Im
agePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:59:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 20 02:59:07.774: INFO: Pod "webserver-deployment-dd94f59b7-z2gvv" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-z2gvv webserver-deployment-dd94f59b7- deployment-9862 /api/v1/namespaces/deployment-9862/pods/webserver-deployment-dd94f59b7-z2gvv 71685c8b-d030-42dc-89f6-d52a439742be 108158 0 2020-07-20 02:59:06 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 8dbb2d45-c71c-47ed-85d1-0507be5d44d1 0xc002646370 0xc002646371}] [] [{kube-controller-manager Update v1 2020-07-20 02:59:06 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8dbb2d45-c71c-47ed-85d1-0507be5d44d1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-sq9jp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-sq9jp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-sq9jp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 02:59:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:59:07.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9862" for this suite. • [SLOW TEST:18.517 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":294,"completed":203,"skipped":3298,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:59:08.111: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Jul 20 02:59:09.928: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Jul 20 02:59:26.780: INFO: >>> kubeConfig: /root/.kube/config Jul 20 02:59:30.553: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:59:43.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7834" for this suite. 
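The proportional-scaling spec earlier in this output (namespace deployment-9862) dumps every pod of webserver-deployment while the deployment is scaled in the middle of a rollout. The following is a minimal client-go sketch of the same kind of operation, not the test's actual code: the namespace and deployment name are reused from the log, and the target of 30 replicas is an arbitrary example. With maxSurge/maxUnavailable in play, the deployment controller distributes the added replicas across the old and new ReplicaSets in proportion to their current sizes.

```go
// Sketch only: scale the deployment from the log above mid-rollout and dump
// how the controller splits replicas across its ReplicaSets. The namespace
// "deployment-9862" and name "webserver-deployment" mirror the log; the
// replica target is an arbitrary example.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	ctx := context.TODO()
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	const ns, name = "deployment-9862", "webserver-deployment"

	// Scale through the scale subresource, as `kubectl scale` does.
	scale, err := cs.AppsV1().Deployments(ns).GetScale(ctx, name, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	scale.Spec.Replicas = 30 // example target
	if _, err := cs.AppsV1().Deployments(ns).UpdateScale(ctx, name, scale, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}

	// The new replicas are split across old and new ReplicaSets
	// proportionally to their sizes, which is what the spec asserts.
	rss, err := cs.AppsV1().ReplicaSets(ns).List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, rs := range rss.Items {
		fmt.Printf("%s: %d replicas\n", rs.Name, *rs.Spec.Replicas)
	}
}
```

The pod dumps above are the diagnostic output of exactly this phase: the "available" pods belong to the old ReplicaSet, while the "not available" ones are the proportionally added replicas still in Pending.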
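The crd-publish-openapi spec just above registers one multi-version CRD and then two single-version CRDs in the same group, and asserts that every served version contributes to the aggregated OpenAPI document. A rough sketch of that check follows; the definition keys are hypothetical placeholders (the real test generates random group names), and only the /openapi/v2 fetch itself is taken from known apiserver behavior.

```go
// Minimal sketch (assumed names): verify that two versions of a CRD group
// are both published in the aggregated OpenAPI v2 document, roughly what
// the spec above asserts. The definition keys below are hypothetical.
package main

import (
	"context"
	"fmt"
	"strings"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Fetch the aggregated OpenAPI v2 spec served by kube-apiserver.
	raw, err := cs.Discovery().RESTClient().Get().AbsPath("/openapi/v2").DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	spec := string(raw)

	// Each served CRD version should contribute its own definition key.
	for _, def := range []string{
		"com.example.crd-publish-openapi-test.v1.E2eTestType", // hypothetical
		"com.example.crd-publish-openapi-test.v2.E2eTestType", // hypothetical
	} {
		fmt.Printf("%s published: %v\n", def, strings.Contains(spec, def))
	}
}
```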
• [SLOW TEST:35.013 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":294,"completed":204,"skipped":3301,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:59:43.125: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating pod Jul 20 02:59:47.190: INFO: Pod pod-hostip-d343becf-f35c-43c3-a92b-a28a70d2a5e5 has hostIP: 172.18.0.12 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:59:47.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2021" for this suite. •{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":294,"completed":205,"skipped":3309,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:59:47.198: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Jul 20 02:59:47.460: INFO: Waiting up to 5m0s for pod "downward-api-d69c2903-7090-43a1-84fb-8a950c8c81bf" in namespace "downward-api-1433" to be "Succeeded or Failed" Jul 20 02:59:47.468: INFO: Pod "downward-api-d69c2903-7090-43a1-84fb-8a950c8c81bf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.428715ms Jul 20 02:59:49.472: INFO: Pod "downward-api-d69c2903-7090-43a1-84fb-8a950c8c81bf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011933565s Jul 20 02:59:51.477: INFO: Pod "downward-api-d69c2903-7090-43a1-84fb-8a950c8c81bf": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.017031798s STEP: Saw pod success Jul 20 02:59:51.477: INFO: Pod "downward-api-d69c2903-7090-43a1-84fb-8a950c8c81bf" satisfied condition "Succeeded or Failed" Jul 20 02:59:51.480: INFO: Trying to get logs from node latest-worker2 pod downward-api-d69c2903-7090-43a1-84fb-8a950c8c81bf container dapi-container: STEP: delete the pod Jul 20 02:59:51.521: INFO: Waiting for pod downward-api-d69c2903-7090-43a1-84fb-8a950c8c81bf to disappear Jul 20 02:59:51.549: INFO: Pod downward-api-d69c2903-7090-43a1-84fb-8a950c8c81bf no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:59:51.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1433" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":294,"completed":206,"skipped":3325,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:59:51.583: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:161 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 02:59:51.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7787" for this suite. 
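The QOS verification above relies on a fixed classification rule: if every container sets limits equal to requests for both cpu and memory, the pod is Guaranteed; if requests and limits are only partly set or differ, it is Burstable; with neither set at all, it is BestEffort. A minimal sketch of the Guaranteed case, assuming a reachable cluster; the pod name, image, and values are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: qos-demo             # illustrative name
spec:
  containers:
  - name: app
    image: k8s.gcr.io/pause:3.2
    resources:
      requests:
        cpu: 100m
        memory: 64Mi
      limits:
        cpu: 100m            # limits == requests for every resource
        memory: 64Mi
EOF
kubectl get pod qos-demo -o jsonpath='{.status.qosClass}'   # expect: Guaranteed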
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":294,"completed":207,"skipped":3351,"failed":0} ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 02:59:51.773: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Jul 20 02:59:52.016: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 02:59:52.018: INFO: Number of nodes with available pods: 0 Jul 20 02:59:52.019: INFO: Node latest-worker is running more than one daemon pod Jul 20 02:59:53.201: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 02:59:53.204: INFO: Number of nodes with available pods: 0 Jul 20 02:59:53.204: INFO: Node latest-worker is running more than one daemon pod Jul 20 02:59:54.023: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 02:59:54.026: INFO: Number of nodes with available pods: 0 Jul 20 02:59:54.026: INFO: Node latest-worker is running more than one daemon pod Jul 20 02:59:55.199: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 02:59:55.203: INFO: Number of nodes with available pods: 0 Jul 20 02:59:55.203: INFO: Node latest-worker is running more than one daemon pod Jul 20 02:59:56.023: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 02:59:56.031: INFO: Number of nodes with available pods: 0 Jul 20 02:59:56.032: INFO: Node latest-worker is running more than one daemon pod Jul 20 02:59:57.071: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 02:59:57.075: INFO: Number of nodes with available pods: 1 Jul 20 02:59:57.075: INFO: Node latest-worker2 is running more than one daemon pod Jul 20 02:59:58.039: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 02:59:58.177: INFO: Number of nodes with available pods: 2 Jul 20 02:59:58.177: 
INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. Jul 20 02:59:58.237: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 02:59:58.327: INFO: Number of nodes with available pods: 1 Jul 20 02:59:58.327: INFO: Node latest-worker is running more than one daemon pod Jul 20 02:59:59.332: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 02:59:59.335: INFO: Number of nodes with available pods: 1 Jul 20 02:59:59.335: INFO: Node latest-worker is running more than one daemon pod Jul 20 03:00:00.404: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 03:00:00.408: INFO: Number of nodes with available pods: 1 Jul 20 03:00:00.408: INFO: Node latest-worker is running more than one daemon pod Jul 20 03:00:01.332: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 03:00:01.336: INFO: Number of nodes with available pods: 1 Jul 20 03:00:01.336: INFO: Node latest-worker is running more than one daemon pod Jul 20 03:00:02.332: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 03:00:02.336: INFO: Number of nodes with available pods: 1 Jul 20 03:00:02.336: INFO: Node latest-worker is running more than one daemon pod Jul 20 03:00:03.336: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 03:00:03.340: INFO: Number of nodes with available pods: 1 Jul 20 03:00:03.340: INFO: Node latest-worker is running more than one daemon pod Jul 20 03:00:04.332: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 03:00:04.335: INFO: Number of nodes with available pods: 1 Jul 20 03:00:04.335: INFO: Node latest-worker is running more than one daemon pod Jul 20 03:00:05.331: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 03:00:05.334: INFO: Number of nodes with available pods: 1 Jul 20 03:00:05.334: INFO: Node latest-worker is running more than one daemon pod Jul 20 03:00:06.333: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 03:00:06.336: INFO: Number of nodes with available pods: 1 Jul 20 03:00:06.336: INFO: Node latest-worker is running more than one daemon pod Jul 20 03:00:07.335: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 03:00:07.340: INFO: Number of nodes with available pods: 2 Jul 20 03:00:07.340: INFO: Number of running nodes: 2, number of available pods: 
2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3166, will wait for the garbage collector to delete the pods Jul 20 03:00:07.400: INFO: Deleting DaemonSet.extensions daemon-set took: 6.373176ms Jul 20 03:00:07.800: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.227622ms Jul 20 03:00:23.304: INFO: Number of nodes with available pods: 0 Jul 20 03:00:23.304: INFO: Number of running nodes: 0, number of available pods: 0 Jul 20 03:00:23.308: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3166/daemonsets","resourceVersion":"108774"},"items":null} Jul 20 03:00:23.311: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3166/pods","resourceVersion":"108774"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 03:00:23.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3166" for this suite. • [SLOW TEST:31.585 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":294,"completed":208,"skipped":3351,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 03:00:23.358: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 20 03:00:23.417: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-eb58dee8-616f-4365-ad6d-12663d552def" in namespace "security-context-test-2940" to be "Succeeded or Failed" Jul 20 03:00:23.439: INFO: Pod "busybox-privileged-false-eb58dee8-616f-4365-ad6d-12663d552def": Phase="Pending", Reason="", readiness=false. Elapsed: 21.956522ms Jul 20 03:00:25.764: INFO: Pod "busybox-privileged-false-eb58dee8-616f-4365-ad6d-12663d552def": Phase="Pending", Reason="", readiness=false. Elapsed: 2.346462676s Jul 20 03:00:27.768: INFO: Pod "busybox-privileged-false-eb58dee8-616f-4365-ad6d-12663d552def": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.350632406s Jul 20 03:00:29.772: INFO: Pod "busybox-privileged-false-eb58dee8-616f-4365-ad6d-12663d552def": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.354639918s Jul 20 03:00:29.772: INFO: Pod "busybox-privileged-false-eb58dee8-616f-4365-ad6d-12663d552def" satisfied condition "Succeeded or Failed" Jul 20 03:00:29.779: INFO: Got logs for pod "busybox-privileged-false-eb58dee8-616f-4365-ad6d-12663d552def": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 03:00:29.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2940" for this suite. • [SLOW TEST:6.490 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 When creating a pod with privileged /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:227 should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":209,"skipped":3390,"failed":0} SSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 03:00:29.849: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 03:00:34.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-52" for this suite. 
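The kubelet test above starts a busybox container with a read-only root filesystem and checks that writes to it fail. A minimal sketch of the same check, assuming a reachable cluster; the pod name and probe command are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: readonly-root-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "echo probe > /file; echo exit=$?"]
    securityContext:
      readOnlyRootFilesystem: true
EOF
# Once the container has run, the write attempt should have failed:
kubectl logs readonly-root-demo
# expected: a "Read-only file system" error from the shell, then "exit=1"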
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":210,"skipped":3395,"failed":0} SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 03:00:34.156: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:255 [It] should check if kubectl can dry-run update Pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jul 20 03:00:34.318: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-6351' Jul 20 03:00:39.354: INFO: stderr: "" Jul 20 03:00:39.354: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: replace the image in the pod with server-side dry-run Jul 20 03:00:39.354: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod -o json --namespace=kubectl-6351' Jul 20 03:00:39.489: INFO: stderr: "" Jul 20 03:00:39.490: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-07-20T03:00:39Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {}\n }\n },\n \"manager\": \"kubectl-run\",\n \"operation\": \"Update\",\n \"time\": \"2020-07-20T03:00:39Z\"\n },\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:message\": {},\n \"f:reason\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:message\": {},\n \"f:reason\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:startTime\": {}\n 
}\n },\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"time\": \"2020-07-20T03:00:39Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-6351\",\n \"resourceVersion\": \"108888\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-6351/pods/e2e-test-httpd-pod\",\n \"uid\": \"8c707eb1-bfdc-4652-a889-8da686a7d9d2\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-wtkcn\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"latest-worker2\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-wtkcn\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-wtkcn\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-07-20T03:00:39Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-07-20T03:00:39Z\",\n \"message\": \"containers with unready status: [e2e-test-httpd-pod]\",\n \"reason\": \"ContainersNotReady\",\n \"status\": \"False\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-07-20T03:00:39Z\",\n \"message\": \"containers with unready status: [e2e-test-httpd-pod]\",\n \"reason\": \"ContainersNotReady\",\n \"status\": \"False\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-07-20T03:00:39Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": false,\n \"restartCount\": 0,\n \"started\": false,\n \"state\": {\n \"waiting\": {\n \"reason\": \"ContainerCreating\"\n }\n }\n }\n ],\n \"hostIP\": \"172.18.0.12\",\n \"phase\": \"Pending\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-07-20T03:00:39Z\"\n }\n}\n" Jul 20 03:00:39.490: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config replace -f - --dry-run server --namespace=kubectl-6351' Jul 20 03:00:39.814: INFO: stderr: "W0720 03:00:39.561710 2634 helpers.go:553] --dry-run is deprecated and can be replaced with --dry-run=client.\n" Jul 20 03:00:39.814: INFO: stdout: "pod/e2e-test-httpd-pod replaced (dry run)\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/httpd:2.4.38-alpine Jul 20 03:00:39.817: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 
--kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-6351' Jul 20 03:00:43.185: INFO: stderr: "" Jul 20 03:00:43.185: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 03:00:43.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6351" for this suite. • [SLOW TEST:9.052 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl server-side dry-run /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:914 should check if kubectl can dry-run update Pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":294,"completed":211,"skipped":3405,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 03:00:43.209: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:255 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting the proxy server Jul 20 03:00:43.337: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 03:00:43.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2436" for this suite. 
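proxy -p 0 asks kubectl to bind an ephemeral port instead of the default 8001, so the chosen port has to be scraped from the proxy's startup line before /api/ can be curled, which is what the "curling proxy /api/ output" STEP above does. A minimal sketch, assuming kubectl is configured for the cluster; the temp-file path is arbitrary:

kubectl proxy -p 0 --disable-filter > /tmp/proxy.out 2>&1 &
PROXY_PID=$!
sleep 2
# startup line looks like: Starting to serve on 127.0.0.1:<port>
PORT=$(sed -n 's/.*127\.0\.0\.1:\([0-9]*\).*/\1/p' /tmp/proxy.out)
curl -s "http://127.0.0.1:${PORT}/api/"    # should return the APIVersions object
kill "$PROXY_PID"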
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":294,"completed":212,"skipped":3410,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 03:00:43.442: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on tmpfs Jul 20 03:00:43.526: INFO: Waiting up to 5m0s for pod "pod-a8f617dc-1a63-46c4-90f9-51dcde5d32c2" in namespace "emptydir-9969" to be "Succeeded or Failed" Jul 20 03:00:43.530: INFO: Pod "pod-a8f617dc-1a63-46c4-90f9-51dcde5d32c2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.382947ms Jul 20 03:00:45.546: INFO: Pod "pod-a8f617dc-1a63-46c4-90f9-51dcde5d32c2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01980841s Jul 20 03:00:47.550: INFO: Pod "pod-a8f617dc-1a63-46c4-90f9-51dcde5d32c2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024121243s STEP: Saw pod success Jul 20 03:00:47.551: INFO: Pod "pod-a8f617dc-1a63-46c4-90f9-51dcde5d32c2" satisfied condition "Succeeded or Failed" Jul 20 03:00:47.553: INFO: Trying to get logs from node latest-worker2 pod pod-a8f617dc-1a63-46c4-90f9-51dcde5d32c2 container test-container: STEP: delete the pod Jul 20 03:00:47.606: INFO: Waiting for pod pod-a8f617dc-1a63-46c4-90f9-51dcde5d32c2 to disappear Jul 20 03:00:47.611: INFO: Pod pod-a8f617dc-1a63-46c4-90f9-51dcde5d32c2 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 03:00:47.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9969" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":213,"skipped":3411,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 03:00:47.620: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4883.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4883.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jul 20 03:00:53.750: INFO: DNS probes using dns-4883/dns-test-a9190cae-7ca7-4500-96eb-150d91e48317 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 03:00:53.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4883" for this suite. 
• [SLOW TEST:6.212 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":294,"completed":214,"skipped":3425,"failed":0} SS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 03:00:53.833: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-1790 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-1790 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-1790 Jul 20 03:00:54.426: INFO: Found 0 stateful pods, waiting for 1 Jul 20 03:01:04.431: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Jul 20 03:01:04.434: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1790 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jul 20 03:01:04.717: INFO: stderr: "I0720 03:01:04.573509 2689 log.go:181] (0xc000169080) (0xc000d05a40) Create stream\nI0720 03:01:04.573565 2689 log.go:181] (0xc000169080) (0xc000d05a40) Stream added, broadcasting: 1\nI0720 03:01:04.578302 2689 log.go:181] (0xc000169080) Reply frame received for 1\nI0720 03:01:04.578398 2689 log.go:181] (0xc000169080) (0xc000c0d0e0) Create stream\nI0720 03:01:04.578419 2689 log.go:181] (0xc000169080) (0xc000c0d0e0) Stream added, broadcasting: 3\nI0720 03:01:04.579527 2689 log.go:181] (0xc000169080) Reply frame received for 3\nI0720 03:01:04.579558 2689 log.go:181] (0xc000169080) (0xc000194aa0) Create stream\nI0720 03:01:04.579575 2689 log.go:181] (0xc000169080) (0xc000194aa0) Stream added, broadcasting: 5\nI0720 03:01:04.580680 2689 log.go:181] (0xc000169080) Reply frame received for 5\nI0720 03:01:04.682604 2689 log.go:181] (0xc000169080) Data frame received for 5\nI0720 03:01:04.682638 2689 log.go:181] (0xc000194aa0) (5) Data frame handling\nI0720 03:01:04.682669 2689 log.go:181] (0xc000194aa0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0720 
03:01:04.706837 2689 log.go:181] (0xc000169080) Data frame received for 3\nI0720 03:01:04.706876 2689 log.go:181] (0xc000c0d0e0) (3) Data frame handling\nI0720 03:01:04.706917 2689 log.go:181] (0xc000c0d0e0) (3) Data frame sent\nI0720 03:01:04.707031 2689 log.go:181] (0xc000169080) Data frame received for 3\nI0720 03:01:04.707056 2689 log.go:181] (0xc000c0d0e0) (3) Data frame handling\nI0720 03:01:04.707284 2689 log.go:181] (0xc000169080) Data frame received for 5\nI0720 03:01:04.707296 2689 log.go:181] (0xc000194aa0) (5) Data frame handling\nI0720 03:01:04.712317 2689 log.go:181] (0xc000169080) Data frame received for 1\nI0720 03:01:04.712342 2689 log.go:181] (0xc000d05a40) (1) Data frame handling\nI0720 03:01:04.712360 2689 log.go:181] (0xc000d05a40) (1) Data frame sent\nI0720 03:01:04.712370 2689 log.go:181] (0xc000169080) (0xc000d05a40) Stream removed, broadcasting: 1\nI0720 03:01:04.712387 2689 log.go:181] (0xc000169080) Go away received\nI0720 03:01:04.712899 2689 log.go:181] (0xc000169080) (0xc000d05a40) Stream removed, broadcasting: 1\nI0720 03:01:04.712922 2689 log.go:181] (0xc000169080) (0xc000c0d0e0) Stream removed, broadcasting: 3\nI0720 03:01:04.712935 2689 log.go:181] (0xc000169080) (0xc000194aa0) Stream removed, broadcasting: 5\n" Jul 20 03:01:04.718: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jul 20 03:01:04.718: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jul 20 03:01:04.721: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jul 20 03:01:14.726: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jul 20 03:01:14.726: INFO: Waiting for statefulset status.replicas updated to 0 Jul 20 03:01:14.759: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999593s Jul 20 03:01:15.764: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.975482573s Jul 20 03:01:16.789: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.970693803s Jul 20 03:01:17.794: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.946168045s Jul 20 03:01:18.797: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.941359789s Jul 20 03:01:19.802: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.93760732s Jul 20 03:01:20.807: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.93262898s Jul 20 03:01:21.812: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.927659818s Jul 20 03:01:22.816: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.922659777s Jul 20 03:01:23.820: INFO: Verifying statefulset ss doesn't scale past 1 for another 918.698162ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-1790 Jul 20 03:01:24.825: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1790 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 20 03:01:25.037: INFO: stderr: "I0720 03:01:24.942912 2707 log.go:181] (0xc000859340) (0xc000aa1220) Create stream\nI0720 03:01:24.942974 2707 log.go:181] (0xc000859340) (0xc000aa1220) Stream added, broadcasting: 1\nI0720 03:01:24.945364 2707 log.go:181] (0xc000859340) Reply frame received for 1\nI0720 03:01:24.945406 2707 log.go:181] (0xc000859340) (0xc000af9400) Create 
stream\nI0720 03:01:24.945416 2707 log.go:181] (0xc000859340) (0xc000af9400) Stream added, broadcasting: 3\nI0720 03:01:24.946362 2707 log.go:181] (0xc000859340) Reply frame received for 3\nI0720 03:01:24.946385 2707 log.go:181] (0xc000859340) (0xc000eaa1e0) Create stream\nI0720 03:01:24.946393 2707 log.go:181] (0xc000859340) (0xc000eaa1e0) Stream added, broadcasting: 5\nI0720 03:01:24.947156 2707 log.go:181] (0xc000859340) Reply frame received for 5\nI0720 03:01:25.030539 2707 log.go:181] (0xc000859340) Data frame received for 5\nI0720 03:01:25.030594 2707 log.go:181] (0xc000eaa1e0) (5) Data frame handling\nI0720 03:01:25.030620 2707 log.go:181] (0xc000eaa1e0) (5) Data frame sent\nI0720 03:01:25.030653 2707 log.go:181] (0xc000859340) Data frame received for 5\nI0720 03:01:25.030671 2707 log.go:181] (0xc000eaa1e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0720 03:01:25.030698 2707 log.go:181] (0xc000859340) Data frame received for 3\nI0720 03:01:25.030731 2707 log.go:181] (0xc000af9400) (3) Data frame handling\nI0720 03:01:25.030758 2707 log.go:181] (0xc000af9400) (3) Data frame sent\nI0720 03:01:25.030776 2707 log.go:181] (0xc000859340) Data frame received for 3\nI0720 03:01:25.030789 2707 log.go:181] (0xc000af9400) (3) Data frame handling\nI0720 03:01:25.031652 2707 log.go:181] (0xc000859340) Data frame received for 1\nI0720 03:01:25.031699 2707 log.go:181] (0xc000aa1220) (1) Data frame handling\nI0720 03:01:25.031746 2707 log.go:181] (0xc000aa1220) (1) Data frame sent\nI0720 03:01:25.031783 2707 log.go:181] (0xc000859340) (0xc000aa1220) Stream removed, broadcasting: 1\nI0720 03:01:25.031806 2707 log.go:181] (0xc000859340) Go away received\nI0720 03:01:25.032479 2707 log.go:181] (0xc000859340) (0xc000aa1220) Stream removed, broadcasting: 1\nI0720 03:01:25.032507 2707 log.go:181] (0xc000859340) (0xc000af9400) Stream removed, broadcasting: 3\nI0720 03:01:25.032519 2707 log.go:181] (0xc000859340) (0xc000eaa1e0) Stream removed, broadcasting: 5\n" Jul 20 03:01:25.038: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jul 20 03:01:25.038: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jul 20 03:01:25.058: INFO: Found 1 stateful pods, waiting for 3 Jul 20 03:01:35.062: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jul 20 03:01:35.062: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jul 20 03:01:35.062: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Jul 20 03:01:35.068: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1790 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jul 20 03:01:35.286: INFO: stderr: "I0720 03:01:35.202514 2726 log.go:181] (0xc000c35c30) (0xc000aa5360) Create stream\nI0720 03:01:35.202576 2726 log.go:181] (0xc000c35c30) (0xc000aa5360) Stream added, broadcasting: 1\nI0720 03:01:35.205292 2726 log.go:181] (0xc000c35c30) Reply frame received for 1\nI0720 03:01:35.205334 2726 log.go:181] (0xc000c35c30) (0xc000864aa0) Create stream\nI0720 03:01:35.205351 2726 log.go:181] (0xc000c35c30) (0xc000864aa0) Stream added, broadcasting: 3\nI0720 03:01:35.206337 2726 log.go:181] 
(0xc000c35c30) Reply frame received for 3\nI0720 03:01:35.206368 2726 log.go:181] (0xc000c35c30) (0xc0004dff40) Create stream\nI0720 03:01:35.206377 2726 log.go:181] (0xc000c35c30) (0xc0004dff40) Stream added, broadcasting: 5\nI0720 03:01:35.207274 2726 log.go:181] (0xc000c35c30) Reply frame received for 5\nI0720 03:01:35.278111 2726 log.go:181] (0xc000c35c30) Data frame received for 5\nI0720 03:01:35.278159 2726 log.go:181] (0xc0004dff40) (5) Data frame handling\nI0720 03:01:35.278183 2726 log.go:181] (0xc0004dff40) (5) Data frame sent\nI0720 03:01:35.278230 2726 log.go:181] (0xc000c35c30) Data frame received for 5\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0720 03:01:35.278255 2726 log.go:181] (0xc0004dff40) (5) Data frame handling\nI0720 03:01:35.278306 2726 log.go:181] (0xc000c35c30) Data frame received for 3\nI0720 03:01:35.278353 2726 log.go:181] (0xc000864aa0) (3) Data frame handling\nI0720 03:01:35.278380 2726 log.go:181] (0xc000864aa0) (3) Data frame sent\nI0720 03:01:35.278393 2726 log.go:181] (0xc000c35c30) Data frame received for 3\nI0720 03:01:35.278403 2726 log.go:181] (0xc000864aa0) (3) Data frame handling\nI0720 03:01:35.279951 2726 log.go:181] (0xc000c35c30) Data frame received for 1\nI0720 03:01:35.279986 2726 log.go:181] (0xc000aa5360) (1) Data frame handling\nI0720 03:01:35.280006 2726 log.go:181] (0xc000aa5360) (1) Data frame sent\nI0720 03:01:35.280026 2726 log.go:181] (0xc000c35c30) (0xc000aa5360) Stream removed, broadcasting: 1\nI0720 03:01:35.280061 2726 log.go:181] (0xc000c35c30) Go away received\nI0720 03:01:35.280537 2726 log.go:181] (0xc000c35c30) (0xc000aa5360) Stream removed, broadcasting: 1\nI0720 03:01:35.280560 2726 log.go:181] (0xc000c35c30) (0xc000864aa0) Stream removed, broadcasting: 3\nI0720 03:01:35.280572 2726 log.go:181] (0xc000c35c30) (0xc0004dff40) Stream removed, broadcasting: 5\n" Jul 20 03:01:35.286: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jul 20 03:01:35.286: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jul 20 03:01:35.286: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1790 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jul 20 03:01:35.512: INFO: stderr: "I0720 03:01:35.409293 2744 log.go:181] (0xc000922c60) (0xc000b5b7c0) Create stream\nI0720 03:01:35.409343 2744 log.go:181] (0xc000922c60) (0xc000b5b7c0) Stream added, broadcasting: 1\nI0720 03:01:35.417057 2744 log.go:181] (0xc000922c60) Reply frame received for 1\nI0720 03:01:35.417089 2744 log.go:181] (0xc000922c60) (0xc000857180) Create stream\nI0720 03:01:35.417098 2744 log.go:181] (0xc000922c60) (0xc000857180) Stream added, broadcasting: 3\nI0720 03:01:35.418024 2744 log.go:181] (0xc000922c60) Reply frame received for 3\nI0720 03:01:35.418085 2744 log.go:181] (0xc000922c60) (0xc0008106e0) Create stream\nI0720 03:01:35.418105 2744 log.go:181] (0xc000922c60) (0xc0008106e0) Stream added, broadcasting: 5\nI0720 03:01:35.418877 2744 log.go:181] (0xc000922c60) Reply frame received for 5\nI0720 03:01:35.474149 2744 log.go:181] (0xc000922c60) Data frame received for 5\nI0720 03:01:35.474170 2744 log.go:181] (0xc0008106e0) (5) Data frame handling\nI0720 03:01:35.474182 2744 log.go:181] (0xc0008106e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0720 03:01:35.502758 2744 log.go:181] (0xc000922c60) Data 
frame received for 5\nI0720 03:01:35.502782 2744 log.go:181] (0xc0008106e0) (5) Data frame handling\nI0720 03:01:35.502807 2744 log.go:181] (0xc000922c60) Data frame received for 3\nI0720 03:01:35.502841 2744 log.go:181] (0xc000857180) (3) Data frame handling\nI0720 03:01:35.502887 2744 log.go:181] (0xc000857180) (3) Data frame sent\nI0720 03:01:35.502907 2744 log.go:181] (0xc000922c60) Data frame received for 3\nI0720 03:01:35.502920 2744 log.go:181] (0xc000857180) (3) Data frame handling\nI0720 03:01:35.505379 2744 log.go:181] (0xc000922c60) Data frame received for 1\nI0720 03:01:35.505410 2744 log.go:181] (0xc000b5b7c0) (1) Data frame handling\nI0720 03:01:35.505424 2744 log.go:181] (0xc000b5b7c0) (1) Data frame sent\nI0720 03:01:35.505439 2744 log.go:181] (0xc000922c60) (0xc000b5b7c0) Stream removed, broadcasting: 1\nI0720 03:01:35.505480 2744 log.go:181] (0xc000922c60) Go away received\nI0720 03:01:35.505807 2744 log.go:181] (0xc000922c60) (0xc000b5b7c0) Stream removed, broadcasting: 1\nI0720 03:01:35.505824 2744 log.go:181] (0xc000922c60) (0xc000857180) Stream removed, broadcasting: 3\nI0720 03:01:35.505834 2744 log.go:181] (0xc000922c60) (0xc0008106e0) Stream removed, broadcasting: 5\n" Jul 20 03:01:35.512: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jul 20 03:01:35.512: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jul 20 03:01:35.512: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1790 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jul 20 03:01:35.748: INFO: stderr: "I0720 03:01:35.642098 2763 log.go:181] (0xc00088c160) (0xc000a33b80) Create stream\nI0720 03:01:35.642144 2763 log.go:181] (0xc00088c160) (0xc000a33b80) Stream added, broadcasting: 1\nI0720 03:01:35.647546 2763 log.go:181] (0xc00088c160) Reply frame received for 1\nI0720 03:01:35.647595 2763 log.go:181] (0xc00088c160) (0xc00050ab40) Create stream\nI0720 03:01:35.647609 2763 log.go:181] (0xc00088c160) (0xc00050ab40) Stream added, broadcasting: 3\nI0720 03:01:35.648558 2763 log.go:181] (0xc00088c160) Reply frame received for 3\nI0720 03:01:35.648597 2763 log.go:181] (0xc00088c160) (0xc000428780) Create stream\nI0720 03:01:35.648610 2763 log.go:181] (0xc00088c160) (0xc000428780) Stream added, broadcasting: 5\nI0720 03:01:35.649604 2763 log.go:181] (0xc00088c160) Reply frame received for 5\nI0720 03:01:35.705542 2763 log.go:181] (0xc00088c160) Data frame received for 5\nI0720 03:01:35.705563 2763 log.go:181] (0xc000428780) (5) Data frame handling\nI0720 03:01:35.705574 2763 log.go:181] (0xc000428780) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0720 03:01:35.742920 2763 log.go:181] (0xc00088c160) Data frame received for 3\nI0720 03:01:35.743015 2763 log.go:181] (0xc00050ab40) (3) Data frame handling\nI0720 03:01:35.743038 2763 log.go:181] (0xc00050ab40) (3) Data frame sent\nI0720 03:01:35.743045 2763 log.go:181] (0xc00088c160) Data frame received for 3\nI0720 03:01:35.743049 2763 log.go:181] (0xc00050ab40) (3) Data frame handling\nI0720 03:01:35.743155 2763 log.go:181] (0xc00088c160) Data frame received for 5\nI0720 03:01:35.743180 2763 log.go:181] (0xc000428780) (5) Data frame handling\nI0720 03:01:35.744841 2763 log.go:181] (0xc00088c160) Data frame received for 1\nI0720 03:01:35.744859 2763 log.go:181] (0xc000a33b80) (1) Data frame 
handling\nI0720 03:01:35.744874 2763 log.go:181] (0xc000a33b80) (1) Data frame sent\nI0720 03:01:35.744886 2763 log.go:181] (0xc00088c160) (0xc000a33b80) Stream removed, broadcasting: 1\nI0720 03:01:35.745014 2763 log.go:181] (0xc00088c160) Go away received\nI0720 03:01:35.745106 2763 log.go:181] (0xc00088c160) (0xc000a33b80) Stream removed, broadcasting: 1\nI0720 03:01:35.745119 2763 log.go:181] (0xc00088c160) (0xc00050ab40) Stream removed, broadcasting: 3\nI0720 03:01:35.745125 2763 log.go:181] (0xc00088c160) (0xc000428780) Stream removed, broadcasting: 5\n" Jul 20 03:01:35.748: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jul 20 03:01:35.748: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jul 20 03:01:35.748: INFO: Waiting for statefulset status.replicas updated to 0 Jul 20 03:01:35.759: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Jul 20 03:01:45.767: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jul 20 03:01:45.767: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jul 20 03:01:45.767: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jul 20 03:01:45.831: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999531s Jul 20 03:01:46.837: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.942018816s Jul 20 03:01:47.842: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.936531002s Jul 20 03:01:48.847: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.931148688s Jul 20 03:01:49.852: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.925622865s Jul 20 03:01:50.857: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.920891455s Jul 20 03:01:51.863: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.915786781s Jul 20 03:01:52.869: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.910433851s Jul 20 03:01:53.885: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.904060352s Jul 20 03:01:54.891: INFO: Verifying statefulset ss doesn't scale past 3 for another 887.880199ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-1790 Jul 20 03:01:55.894: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1790 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 20 03:01:56.145: INFO: stderr: "I0720 03:01:56.049503 2782 log.go:181] (0xc000d23130) (0xc000cb43c0) Create stream\nI0720 03:01:56.049558 2782 log.go:181] (0xc000d23130) (0xc000cb43c0) Stream added, broadcasting: 1\nI0720 03:01:56.054212 2782 log.go:181] (0xc000d23130) Reply frame received for 1\nI0720 03:01:56.054252 2782 log.go:181] (0xc000d23130) (0xc000ae50e0) Create stream\nI0720 03:01:56.054265 2782 log.go:181] (0xc000d23130) (0xc000ae50e0) Stream added, broadcasting: 3\nI0720 03:01:56.055104 2782 log.go:181] (0xc000d23130) Reply frame received for 3\nI0720 03:01:56.055124 2782 log.go:181] (0xc000d23130) (0xc000ae1d60) Create stream\nI0720 03:01:56.055131 2782 log.go:181] (0xc000d23130) (0xc000ae1d60) Stream added, broadcasting: 5\nI0720 03:01:56.055968 2782 log.go:181] (0xc000d23130) Reply frame received for 5\nI0720 03:01:56.138120 2782 log.go:181] 
(0xc000d23130) Data frame received for 3\nI0720 03:01:56.138175 2782 log.go:181] (0xc000ae50e0) (3) Data frame handling\nI0720 03:01:56.138190 2782 log.go:181] (0xc000ae50e0) (3) Data frame sent\nI0720 03:01:56.138202 2782 log.go:181] (0xc000d23130) Data frame received for 3\nI0720 03:01:56.138211 2782 log.go:181] (0xc000ae50e0) (3) Data frame handling\nI0720 03:01:56.138224 2782 log.go:181] (0xc000d23130) Data frame received for 5\nI0720 03:01:56.138234 2782 log.go:181] (0xc000ae1d60) (5) Data frame handling\nI0720 03:01:56.138245 2782 log.go:181] (0xc000ae1d60) (5) Data frame sent\nI0720 03:01:56.138255 2782 log.go:181] (0xc000d23130) Data frame received for 5\nI0720 03:01:56.138264 2782 log.go:181] (0xc000ae1d60) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0720 03:01:56.139628 2782 log.go:181] (0xc000d23130) Data frame received for 1\nI0720 03:01:56.139675 2782 log.go:181] (0xc000cb43c0) (1) Data frame handling\nI0720 03:01:56.139703 2782 log.go:181] (0xc000cb43c0) (1) Data frame sent\nI0720 03:01:56.139735 2782 log.go:181] (0xc000d23130) (0xc000cb43c0) Stream removed, broadcasting: 1\nI0720 03:01:56.139761 2782 log.go:181] (0xc000d23130) Go away received\nI0720 03:01:56.140272 2782 log.go:181] (0xc000d23130) (0xc000cb43c0) Stream removed, broadcasting: 1\nI0720 03:01:56.140291 2782 log.go:181] (0xc000d23130) (0xc000ae50e0) Stream removed, broadcasting: 3\nI0720 03:01:56.140302 2782 log.go:181] (0xc000d23130) (0xc000ae1d60) Stream removed, broadcasting: 5\n" Jul 20 03:01:56.145: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jul 20 03:01:56.145: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jul 20 03:01:56.145: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1790 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 20 03:01:56.368: INFO: stderr: "I0720 03:01:56.292472 2800 log.go:181] (0xc0006fa000) (0xc0006aab40) Create stream\nI0720 03:01:56.292544 2800 log.go:181] (0xc0006fa000) (0xc0006aab40) Stream added, broadcasting: 1\nI0720 03:01:56.294450 2800 log.go:181] (0xc0006fa000) Reply frame received for 1\nI0720 03:01:56.294505 2800 log.go:181] (0xc0006fa000) (0xc0003b88c0) Create stream\nI0720 03:01:56.294520 2800 log.go:181] (0xc0006fa000) (0xc0003b88c0) Stream added, broadcasting: 3\nI0720 03:01:56.295664 2800 log.go:181] (0xc0006fa000) Reply frame received for 3\nI0720 03:01:56.295703 2800 log.go:181] (0xc0006fa000) (0xc0003b9900) Create stream\nI0720 03:01:56.295733 2800 log.go:181] (0xc0006fa000) (0xc0003b9900) Stream added, broadcasting: 5\nI0720 03:01:56.296645 2800 log.go:181] (0xc0006fa000) Reply frame received for 5\nI0720 03:01:56.361895 2800 log.go:181] (0xc0006fa000) Data frame received for 3\nI0720 03:01:56.361964 2800 log.go:181] (0xc0006fa000) Data frame received for 5\nI0720 03:01:56.362002 2800 log.go:181] (0xc0003b9900) (5) Data frame handling\nI0720 03:01:56.362016 2800 log.go:181] (0xc0003b9900) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0720 03:01:56.362025 2800 log.go:181] (0xc0006fa000) Data frame received for 5\nI0720 03:01:56.362046 2800 log.go:181] (0xc0003b9900) (5) Data frame handling\nI0720 03:01:56.362072 2800 log.go:181] (0xc0003b88c0) (3) Data frame handling\nI0720 03:01:56.362091 2800 log.go:181] (0xc0003b88c0) (3) Data frame sent\nI0720 
03:01:56.362106 2800 log.go:181] (0xc0006fa000) Data frame received for 3\nI0720 03:01:56.362120 2800 log.go:181] (0xc0003b88c0) (3) Data frame handling\nI0720 03:01:56.363228 2800 log.go:181] (0xc0006fa000) Data frame received for 1\nI0720 03:01:56.363246 2800 log.go:181] (0xc0006aab40) (1) Data frame handling\nI0720 03:01:56.363259 2800 log.go:181] (0xc0006aab40) (1) Data frame sent\nI0720 03:01:56.363292 2800 log.go:181] (0xc0006fa000) (0xc0006aab40) Stream removed, broadcasting: 1\nI0720 03:01:56.363323 2800 log.go:181] (0xc0006fa000) Go away received\nI0720 03:01:56.363626 2800 log.go:181] (0xc0006fa000) (0xc0006aab40) Stream removed, broadcasting: 1\nI0720 03:01:56.363642 2800 log.go:181] (0xc0006fa000) (0xc0003b88c0) Stream removed, broadcasting: 3\nI0720 03:01:56.363652 2800 log.go:181] (0xc0006fa000) (0xc0003b9900) Stream removed, broadcasting: 5\n" Jul 20 03:01:56.368: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jul 20 03:01:56.368: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jul 20 03:01:56.368: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1790 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 20 03:01:56.576: INFO: stderr: "I0720 03:01:56.506300 2818 log.go:181] (0xc000530dc0) (0xc000989860) Create stream\nI0720 03:01:56.506360 2818 log.go:181] (0xc000530dc0) (0xc000989860) Stream added, broadcasting: 1\nI0720 03:01:56.511493 2818 log.go:181] (0xc000530dc0) Reply frame received for 1\nI0720 03:01:56.511521 2818 log.go:181] (0xc000530dc0) (0xc0003d6be0) Create stream\nI0720 03:01:56.511529 2818 log.go:181] (0xc000530dc0) (0xc0003d6be0) Stream added, broadcasting: 3\nI0720 03:01:56.512636 2818 log.go:181] (0xc000530dc0) Reply frame received for 3\nI0720 03:01:56.512686 2818 log.go:181] (0xc000530dc0) (0xc0003d7ea0) Create stream\nI0720 03:01:56.512706 2818 log.go:181] (0xc000530dc0) (0xc0003d7ea0) Stream added, broadcasting: 5\nI0720 03:01:56.513769 2818 log.go:181] (0xc000530dc0) Reply frame received for 5\nI0720 03:01:56.569147 2818 log.go:181] (0xc000530dc0) Data frame received for 3\nI0720 03:01:56.569185 2818 log.go:181] (0xc0003d6be0) (3) Data frame handling\nI0720 03:01:56.569218 2818 log.go:181] (0xc0003d6be0) (3) Data frame sent\nI0720 03:01:56.569276 2818 log.go:181] (0xc000530dc0) Data frame received for 5\nI0720 03:01:56.569293 2818 log.go:181] (0xc0003d7ea0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0720 03:01:56.569340 2818 log.go:181] (0xc000530dc0) Data frame received for 3\nI0720 03:01:56.569371 2818 log.go:181] (0xc0003d6be0) (3) Data frame handling\nI0720 03:01:56.569406 2818 log.go:181] (0xc0003d7ea0) (5) Data frame sent\nI0720 03:01:56.569430 2818 log.go:181] (0xc000530dc0) Data frame received for 5\nI0720 03:01:56.569449 2818 log.go:181] (0xc0003d7ea0) (5) Data frame handling\nI0720 03:01:56.570815 2818 log.go:181] (0xc000530dc0) Data frame received for 1\nI0720 03:01:56.570845 2818 log.go:181] (0xc000989860) (1) Data frame handling\nI0720 03:01:56.570862 2818 log.go:181] (0xc000989860) (1) Data frame sent\nI0720 03:01:56.570878 2818 log.go:181] (0xc000530dc0) (0xc000989860) Stream removed, broadcasting: 1\nI0720 03:01:56.570894 2818 log.go:181] (0xc000530dc0) Go away received\nI0720 03:01:56.571322 2818 log.go:181] (0xc000530dc0) (0xc000989860) Stream removed, 
broadcasting: 1\nI0720 03:01:56.571348 2818 log.go:181] (0xc000530dc0) (0xc0003d6be0) Stream removed, broadcasting: 3\nI0720 03:01:56.571360 2818 log.go:181] (0xc000530dc0) (0xc0003d7ea0) Stream removed, broadcasting: 5\n" Jul 20 03:01:56.577: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jul 20 03:01:56.577: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jul 20 03:01:56.577: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Jul 20 03:02:26.592: INFO: Deleting all statefulset in ns statefulset-1790 Jul 20 03:02:26.596: INFO: Scaling statefulset ss to 0 Jul 20 03:02:26.606: INFO: Waiting for statefulset status.replicas updated to 0 Jul 20 03:02:26.609: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 03:02:26.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1790" for this suite. • [SLOW TEST:92.815 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":294,"completed":215,"skipped":3427,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 03:02:26.649: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:731 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service nodeport-service with the type=NodePort in namespace services-3880 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-3880 STEP: creating replication controller externalsvc in namespace services-3880 I0720 03:02:27.043482 8 runners.go:190] Created replication controller with name: externalsvc, namespace: services-3880, replica count: 2 I0720 
03:02:30.093936 8 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0720 03:02:33.094220 8 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Jul 20 03:02:33.174: INFO: Creating new exec pod Jul 20 03:02:37.201: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-3880 execpod8876v -- /bin/sh -x -c nslookup nodeport-service.services-3880.svc.cluster.local' Jul 20 03:02:37.434: INFO: stderr: "I0720 03:02:37.341929 2836 log.go:181] (0xc0008533f0) (0xc000c0d5e0) Create stream\nI0720 03:02:37.342002 2836 log.go:181] (0xc0008533f0) (0xc000c0d5e0) Stream added, broadcasting: 1\nI0720 03:02:37.346668 2836 log.go:181] (0xc0008533f0) Reply frame received for 1\nI0720 03:02:37.346701 2836 log.go:181] (0xc0008533f0) (0xc000a61220) Create stream\nI0720 03:02:37.346711 2836 log.go:181] (0xc0008533f0) (0xc000a61220) Stream added, broadcasting: 3\nI0720 03:02:37.347470 2836 log.go:181] (0xc0008533f0) Reply frame received for 3\nI0720 03:02:37.347499 2836 log.go:181] (0xc0008533f0) (0xc000a5ce60) Create stream\nI0720 03:02:37.347509 2836 log.go:181] (0xc0008533f0) (0xc000a5ce60) Stream added, broadcasting: 5\nI0720 03:02:37.348240 2836 log.go:181] (0xc0008533f0) Reply frame received for 5\nI0720 03:02:37.420288 2836 log.go:181] (0xc0008533f0) Data frame received for 5\nI0720 03:02:37.420313 2836 log.go:181] (0xc000a5ce60) (5) Data frame handling\nI0720 03:02:37.420327 2836 log.go:181] (0xc000a5ce60) (5) Data frame sent\n+ nslookup nodeport-service.services-3880.svc.cluster.local\nI0720 03:02:37.426530 2836 log.go:181] (0xc0008533f0) Data frame received for 3\nI0720 03:02:37.426551 2836 log.go:181] (0xc000a61220) (3) Data frame handling\nI0720 03:02:37.426569 2836 log.go:181] (0xc000a61220) (3) Data frame sent\nI0720 03:02:37.426976 2836 log.go:181] (0xc0008533f0) Data frame received for 3\nI0720 03:02:37.426990 2836 log.go:181] (0xc000a61220) (3) Data frame handling\nI0720 03:02:37.427001 2836 log.go:181] (0xc000a61220) (3) Data frame sent\nI0720 03:02:37.427347 2836 log.go:181] (0xc0008533f0) Data frame received for 5\nI0720 03:02:37.427367 2836 log.go:181] (0xc000a5ce60) (5) Data frame handling\nI0720 03:02:37.427536 2836 log.go:181] (0xc0008533f0) Data frame received for 3\nI0720 03:02:37.427548 2836 log.go:181] (0xc000a61220) (3) Data frame handling\nI0720 03:02:37.429297 2836 log.go:181] (0xc0008533f0) Data frame received for 1\nI0720 03:02:37.429319 2836 log.go:181] (0xc000c0d5e0) (1) Data frame handling\nI0720 03:02:37.429333 2836 log.go:181] (0xc000c0d5e0) (1) Data frame sent\nI0720 03:02:37.429346 2836 log.go:181] (0xc0008533f0) (0xc000c0d5e0) Stream removed, broadcasting: 1\nI0720 03:02:37.429420 2836 log.go:181] (0xc0008533f0) Go away received\nI0720 03:02:37.429660 2836 log.go:181] (0xc0008533f0) (0xc000c0d5e0) Stream removed, broadcasting: 1\nI0720 03:02:37.429678 2836 log.go:181] (0xc0008533f0) (0xc000a61220) Stream removed, broadcasting: 3\nI0720 03:02:37.429689 2836 log.go:181] (0xc0008533f0) (0xc000a5ce60) Stream removed, broadcasting: 5\n" Jul 20 03:02:37.434: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-3880.svc.cluster.local\tcanonical name = 
externalsvc.services-3880.svc.cluster.local.\nName:\texternalsvc.services-3880.svc.cluster.local\nAddress: 10.101.143.143\n\n" STEP: deleting ReplicationController externalsvc in namespace services-3880, will wait for the garbage collector to delete the pods Jul 20 03:02:37.493: INFO: Deleting ReplicationController externalsvc took: 6.182923ms Jul 20 03:02:37.594: INFO: Terminating ReplicationController externalsvc pods took: 100.318257ms Jul 20 03:02:53.347: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 03:02:53.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3880" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:735 • [SLOW TEST:26.722 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":294,"completed":216,"skipped":3473,"failed":0} SSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 03:02:53.371: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jul 20 03:02:53.458: INFO: Waiting up to 5m0s for pod "downwardapi-volume-33a8ee11-2ce3-44af-b41a-4d45db943168" in namespace "projected-308" to be "Succeeded or Failed" Jul 20 03:02:53.474: INFO: Pod "downwardapi-volume-33a8ee11-2ce3-44af-b41a-4d45db943168": Phase="Pending", Reason="", readiness=false. Elapsed: 15.844644ms Jul 20 03:02:55.478: INFO: Pod "downwardapi-volume-33a8ee11-2ce3-44af-b41a-4d45db943168": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019746542s Jul 20 03:02:57.481: INFO: Pod "downwardapi-volume-33a8ee11-2ce3-44af-b41a-4d45db943168": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.022758523s STEP: Saw pod success Jul 20 03:02:57.481: INFO: Pod "downwardapi-volume-33a8ee11-2ce3-44af-b41a-4d45db943168" satisfied condition "Succeeded or Failed" Jul 20 03:02:57.483: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-33a8ee11-2ce3-44af-b41a-4d45db943168 container client-container: STEP: delete the pod Jul 20 03:02:57.561: INFO: Waiting for pod downwardapi-volume-33a8ee11-2ce3-44af-b41a-4d45db943168 to disappear Jul 20 03:02:57.576: INFO: Pod downwardapi-volume-33a8ee11-2ce3-44af-b41a-4d45db943168 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 03:02:57.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-308" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":217,"skipped":3476,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 03:02:57.584: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 20 03:03:01.985: INFO: Waiting up to 5m0s for pod "client-envvars-ec681e08-563f-47a2-a2d3-64fd12d9f9df" in namespace "pods-5595" to be "Succeeded or Failed" Jul 20 03:03:02.047: INFO: Pod "client-envvars-ec681e08-563f-47a2-a2d3-64fd12d9f9df": Phase="Pending", Reason="", readiness=false. Elapsed: 61.469991ms Jul 20 03:03:04.051: INFO: Pod "client-envvars-ec681e08-563f-47a2-a2d3-64fd12d9f9df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065658586s Jul 20 03:03:06.055: INFO: Pod "client-envvars-ec681e08-563f-47a2-a2d3-64fd12d9f9df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.069664509s STEP: Saw pod success Jul 20 03:03:06.055: INFO: Pod "client-envvars-ec681e08-563f-47a2-a2d3-64fd12d9f9df" satisfied condition "Succeeded or Failed" Jul 20 03:03:06.058: INFO: Trying to get logs from node latest-worker2 pod client-envvars-ec681e08-563f-47a2-a2d3-64fd12d9f9df container env3cont: STEP: delete the pod Jul 20 03:03:06.089: INFO: Waiting for pod client-envvars-ec681e08-563f-47a2-a2d3-64fd12d9f9df to disappear Jul 20 03:03:06.097: INFO: Pod client-envvars-ec681e08-563f-47a2-a2d3-64fd12d9f9df no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 03:03:06.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5595" for this suite. 
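The service environment variables this spec asserts on follow the documented Docker-links naming rule: the Service name is upper-cased, dashes become underscores, and _SERVICE_HOST / _SERVICE_PORT are appended; only Services that exist before the container starts are reflected. A minimal self-contained sketch of that rule (the service name "fooservice" is illustrative, not the suite's actual name):

package main

import (
	"fmt"
	"strings"
)

// expectedServiceEnv derives the variable names the kubelet injects for a
// Service that predates the pod: NAME_SERVICE_HOST and NAME_SERVICE_PORT,
// with the name upper-cased and dashes turned into underscores.
func expectedServiceEnv(service string) []string {
	prefix := strings.ToUpper(strings.ReplaceAll(service, "-", "_"))
	return []string{prefix + "_SERVICE_HOST", prefix + "_SERVICE_PORT"}
}

func main() {
	// Prints [FOOSERVICE_SERVICE_HOST FOOSERVICE_SERVICE_PORT]; the test
	// checks the client pod's environment for names of this shape.
	fmt.Println(expectedServiceEnv("fooservice"))
}
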
• [SLOW TEST:8.520 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":294,"completed":218,"skipped":3539,"failed":0} SSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 03:03:06.104: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jul 20 03:03:10.313: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 03:03:10.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1439" for this suite. 
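The assertion above, `Expected: &{} to match Container's Termination Message: --`, hinges on TerminationMessagePolicy semantics: FallbackToLogsOnError only substitutes container logs when the container fails, so a succeeding container that wrote nothing to the termination-message path (/dev/termination-log by default) reports an empty message. A minimal sketch of the relevant container fields (image and command are placeholders, not the suite's actual spec):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "busybox",
				Command: []string{"sh", "-c", "echo log output; exit 0"},
				// Fallback applies only on *error*; a pod that succeeds
				// without writing the message file ends up with "".
				TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
			}},
		},
	}
	fmt.Println(pod.Spec.Containers[0].TerminationMessagePolicy)
}
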
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":294,"completed":219,"skipped":3545,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 03:03:10.361: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod with failed condition STEP: updating the pod Jul 20 03:05:11.027: INFO: Successfully updated pod "var-expansion-c815fd95-0dc6-4876-b238-d227e3e42666" STEP: waiting for pod running STEP: deleting the pod gracefully Jul 20 03:05:15.052: INFO: Deleting pod "var-expansion-c815fd95-0dc6-4876-b238-d227e3e42666" in namespace "var-expansion-4189" Jul 20 03:05:15.057: INFO: Wait up to 5m0s for pod "var-expansion-c815fd95-0dc6-4876-b238-d227e3e42666" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 03:05:55.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4189" for this suite. 
• [SLOW TEST:164.721 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]","total":294,"completed":220,"skipped":3563,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 03:05:55.083: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod busybox-2d28f4d4-e327-4152-a57f-b2c25553ef2f in namespace container-probe-1273 Jul 20 03:05:59.208: INFO: Started pod busybox-2d28f4d4-e327-4152-a57f-b2c25553ef2f in namespace container-probe-1273 STEP: checking the pod's current state and verifying that restartCount is present Jul 20 03:05:59.211: INFO: Initial restart count of pod busybox-2d28f4d4-e327-4152-a57f-b2c25553ef2f is 0 Jul 20 03:06:53.797: INFO: Restart count of pod container-probe-1273/busybox-2d28f4d4-e327-4152-a57f-b2c25553ef2f is now 1 (54.586735726s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 03:06:53.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1273" for this suite. • [SLOW TEST:58.768 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":294,"completed":221,"skipped":3619,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 03:06:53.851: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 03:07:05.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6831" for this suite. • [SLOW TEST:11.699 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":294,"completed":222,"skipped":3629,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 03:07:05.551: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test env composition Jul 20 03:07:06.931: INFO: Waiting up to 5m0s for pod "var-expansion-a8bc1058-aa37-41d7-9c0c-c5686a5cbb11" in namespace "var-expansion-3944" to be "Succeeded or Failed" Jul 20 03:07:07.120: INFO: Pod "var-expansion-a8bc1058-aa37-41d7-9c0c-c5686a5cbb11": Phase="Pending", Reason="", readiness=false. Elapsed: 189.097061ms Jul 20 03:07:09.124: INFO: Pod "var-expansion-a8bc1058-aa37-41d7-9c0c-c5686a5cbb11": Phase="Pending", Reason="", readiness=false. Elapsed: 2.193426212s Jul 20 03:07:11.237: INFO: Pod "var-expansion-a8bc1058-aa37-41d7-9c0c-c5686a5cbb11": Phase="Pending", Reason="", readiness=false. Elapsed: 4.30632328s Jul 20 03:07:13.332: INFO: Pod "var-expansion-a8bc1058-aa37-41d7-9c0c-c5686a5cbb11": Phase="Running", Reason="", readiness=true. 
Elapsed: 6.401744977s Jul 20 03:07:15.337: INFO: Pod "var-expansion-a8bc1058-aa37-41d7-9c0c-c5686a5cbb11": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.4060947s STEP: Saw pod success Jul 20 03:07:15.337: INFO: Pod "var-expansion-a8bc1058-aa37-41d7-9c0c-c5686a5cbb11" satisfied condition "Succeeded or Failed" Jul 20 03:07:15.340: INFO: Trying to get logs from node latest-worker2 pod var-expansion-a8bc1058-aa37-41d7-9c0c-c5686a5cbb11 container dapi-container: STEP: delete the pod Jul 20 03:07:15.412: INFO: Waiting for pod var-expansion-a8bc1058-aa37-41d7-9c0c-c5686a5cbb11 to disappear Jul 20 03:07:15.419: INFO: Pod var-expansion-a8bc1058-aa37-41d7-9c0c-c5686a5cbb11 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 03:07:15.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3944" for this suite. • [SLOW TEST:9.875 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":294,"completed":223,"skipped":3647,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 03:07:15.426: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 03:07:15.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-9478" for this suite. 
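The discovery walk above (/apis, then the group, then the group/version, then the resource list) is plain API machinery and can be reproduced with client-go's discovery client. A minimal sketch, assuming the run's kubeconfig path:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// ServerGroups fetches the /apis discovery document; the test then
	// drills into apiextensions.k8s.io the same way.
	groups, err := cs.Discovery().ServerGroups()
	if err != nil {
		panic(err)
	}
	for _, g := range groups.Groups {
		if g.Name == "apiextensions.k8s.io" {
			fmt.Println("found", g.Name, "preferred:", g.PreferredVersion.GroupVersion)
		}
	}
}
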
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":294,"completed":224,"skipped":3648,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 03:07:15.554: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 03:07:22.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8580" for this suite. • [SLOW TEST:7.069 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":294,"completed":225,"skipped":3659,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 03:07:22.623: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Jul 20 03:07:22.794: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2978 /api/v1/namespaces/watch-2978/configmaps/e2e-watch-test-label-changed fdf38d22-9480-441d-bd77-a4f75d140942 110713 0 2020-07-20 03:07:22 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-07-20 03:07:22 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Jul 20 03:07:22.794: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2978 /api/v1/namespaces/watch-2978/configmaps/e2e-watch-test-label-changed fdf38d22-9480-441d-bd77-a4f75d140942 110714 0 2020-07-20 03:07:22 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-07-20 03:07:22 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Jul 20 03:07:22.794: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2978 /api/v1/namespaces/watch-2978/configmaps/e2e-watch-test-label-changed fdf38d22-9480-441d-bd77-a4f75d140942 110715 0 2020-07-20 03:07:22 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-07-20 03:07:22 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Jul 20 03:07:32.907: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2978 /api/v1/namespaces/watch-2978/configmaps/e2e-watch-test-label-changed fdf38d22-9480-441d-bd77-a4f75d140942 110752 0 2020-07-20 03:07:22 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-07-20 03:07:32 +0000 UTC FieldsV1 
{"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Jul 20 03:07:32.907: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2978 /api/v1/namespaces/watch-2978/configmaps/e2e-watch-test-label-changed fdf38d22-9480-441d-bd77-a4f75d140942 110753 0 2020-07-20 03:07:22 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-07-20 03:07:32 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} Jul 20 03:07:32.908: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2978 /api/v1/namespaces/watch-2978/configmaps/e2e-watch-test-label-changed fdf38d22-9480-441d-bd77-a4f75d140942 110754 0 2020-07-20 03:07:22 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-07-20 03:07:32 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 03:07:32.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2978" for this suite. • [SLOW TEST:10.293 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":294,"completed":226,"skipped":3665,"failed":0} SSSSSS ------------------------------ [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 03:07:32.916: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 20 03:09:33.042: INFO: Deleting pod "var-expansion-48410566-49d0-4597-9644-9ef0a1d6fd80" in namespace "var-expansion-1277" Jul 20 03:09:33.046: INFO: Wait up to 5m0s for pod "var-expansion-48410566-49d0-4597-9644-9ef0a1d6fd80" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 03:09:37.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1277" for this suite. 
• [SLOW TEST:124.152 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]","total":294,"completed":227,"skipped":3671,"failed":0} SSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 03:09:37.069: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting the auto-created API token Jul 20 03:09:37.691: INFO: created pod pod-service-account-defaultsa Jul 20 03:09:37.691: INFO: pod pod-service-account-defaultsa service account token volume mount: true Jul 20 03:09:37.698: INFO: created pod pod-service-account-mountsa Jul 20 03:09:37.698: INFO: pod pod-service-account-mountsa service account token volume mount: true Jul 20 03:09:37.721: INFO: created pod pod-service-account-nomountsa Jul 20 03:09:37.721: INFO: pod pod-service-account-nomountsa service account token volume mount: false Jul 20 03:09:37.802: INFO: created pod pod-service-account-defaultsa-mountspec Jul 20 03:09:37.802: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Jul 20 03:09:37.838: INFO: created pod pod-service-account-mountsa-mountspec Jul 20 03:09:37.838: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Jul 20 03:09:37.884: INFO: created pod pod-service-account-nomountsa-mountspec Jul 20 03:09:37.884: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Jul 20 03:09:37.946: INFO: created pod pod-service-account-defaultsa-nomountspec Jul 20 03:09:37.946: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Jul 20 03:09:37.988: INFO: created pod pod-service-account-mountsa-nomountspec Jul 20 03:09:37.988: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Jul 20 03:09:38.031: INFO: created pod pod-service-account-nomountsa-nomountspec Jul 20 03:09:38.031: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 03:09:38.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-4329" for this suite. 
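The nine-pod matrix above encodes the precedence rule for token automount: automountServiceAccountToken on the pod spec, when set, overrides the same field on the ServiceAccount, which is why pod-service-account-nomountsa-mountspec still mounts a token (true) while pod-service-account-defaultsa-nomountspec does not (false). A sketch of the two fields involved:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	off, on := false, true

	// ServiceAccount-level default: opt out of token automount.
	sa := corev1.ServiceAccount{}
	sa.AutomountServiceAccountToken = &off

	// Pod-level setting wins whenever it is non-nil, so this pod mounts
	// a token despite its ServiceAccount opting out.
	pod := corev1.PodSpec{ServiceAccountName: "nomountsa"}
	pod.AutomountServiceAccountToken = &on

	fmt.Println("sa:", *sa.AutomountServiceAccountToken, "pod:", *pod.AutomountServiceAccountToken)
}
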
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":294,"completed":228,"skipped":3675,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 03:09:38.152: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:255 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1328 STEP: creating the pod Jul 20 03:09:38.310: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1407' Jul 20 03:09:38.643: INFO: stderr: "" Jul 20 03:09:38.643: INFO: stdout: "pod/pause created\n" Jul 20 03:09:38.643: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Jul 20 03:09:38.643: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-1407" to be "running and ready" Jul 20 03:09:38.649: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 5.073089ms Jul 20 03:09:40.652: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008400508s Jul 20 03:09:43.164: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.52010963s Jul 20 03:09:45.237: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.593235781s Jul 20 03:09:47.821: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 9.177914999s Jul 20 03:09:50.164: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 11.520431354s Jul 20 03:09:52.167: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 13.523894354s Jul 20 03:09:54.172: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 15.528501619s Jul 20 03:09:54.172: INFO: Pod "pause" satisfied condition "running and ready" Jul 20 03:09:54.172: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: adding the label testing-label with value testing-label-value to a pod Jul 20 03:09:54.172: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-1407' Jul 20 03:09:54.526: INFO: stderr: "" Jul 20 03:09:54.526: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Jul 20 03:09:54.526: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-1407' Jul 20 03:09:54.642: INFO: stderr: "" Jul 20 03:09:54.642: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 16s testing-label-value\n" STEP: removing the label testing-label of a pod Jul 20 03:09:54.642: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-1407' Jul 20 03:09:54.746: INFO: stderr: "" Jul 20 03:09:54.746: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Jul 20 03:09:54.746: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-1407' Jul 20 03:09:54.869: INFO: stderr: "" Jul 20 03:09:54.869: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 16s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1335 STEP: using delete to clean up resources Jul 20 03:09:54.869: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1407' Jul 20 03:09:55.107: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jul 20 03:09:55.108: INFO: stdout: "pod \"pause\" force deleted\n" Jul 20 03:09:55.108: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-1407' Jul 20 03:09:55.294: INFO: stderr: "No resources found in kubectl-1407 namespace.\n" Jul 20 03:09:55.295: INFO: stdout: "" Jul 20 03:09:55.295: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-1407 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jul 20 03:09:55.393: INFO: stderr: "" Jul 20 03:09:55.393: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 03:09:55.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1407" for this suite. 
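Both label operations above go through the same PATCH verb under the hood: `kubectl label` issues a strategic-merge patch, and the trailing-dash form (`testing-label-`) deletes the key by patching it to null. A client-go sketch of the same add/remove pair, reusing the run's namespace and pod name as assumptions:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods := cs.CoreV1().Pods("kubectl-1407")
	ctx := context.TODO()

	// kubectl label pods pause testing-label=testing-label-value
	add := []byte(`{"metadata":{"labels":{"testing-label":"testing-label-value"}}}`)
	if _, err := pods.Patch(ctx, "pause", types.StrategicMergePatchType, add, metav1.PatchOptions{}); err != nil {
		panic(err)
	}
	// kubectl label pods pause testing-label-   (null deletes the key)
	del := []byte(`{"metadata":{"labels":{"testing-label":null}}}`)
	if _, err := pods.Patch(ctx, "pause", types.StrategicMergePatchType, del, metav1.PatchOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("label added and removed")
}
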
• [SLOW TEST:17.247 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1325 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":294,"completed":229,"skipped":3692,"failed":0} SSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 03:09:55.399: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 03:10:01.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-3711" for this suite. • [SLOW TEST:6.426 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":294,"completed":230,"skipped":3699,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 03:10:01.826: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:255 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating Agnhost RC Jul 20 03:10:02.236: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4279' Jul 20 03:10:02.713: INFO: stderr: "" Jul 20 03:10:02.713: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. 
Jul 20 03:10:03.717: INFO: Selector matched 1 pods for map[app:agnhost] Jul 20 03:10:03.717: INFO: Found 0 / 1 Jul 20 03:10:04.717: INFO: Selector matched 1 pods for map[app:agnhost] Jul 20 03:10:04.717: INFO: Found 0 / 1 Jul 20 03:10:05.717: INFO: Selector matched 1 pods for map[app:agnhost] Jul 20 03:10:05.717: INFO: Found 0 / 1 Jul 20 03:10:06.736: INFO: Selector matched 1 pods for map[app:agnhost] Jul 20 03:10:06.736: INFO: Found 1 / 1 Jul 20 03:10:06.736: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Jul 20 03:10:06.740: INFO: Selector matched 1 pods for map[app:agnhost] Jul 20 03:10:06.740: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jul 20 03:10:06.740: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config patch pod agnhost-primary-c5bsk --namespace=kubectl-4279 -p {"metadata":{"annotations":{"x":"y"}}}' Jul 20 03:10:06.873: INFO: stderr: "" Jul 20 03:10:06.873: INFO: stdout: "pod/agnhost-primary-c5bsk patched\n" STEP: checking annotations Jul 20 03:10:06.895: INFO: Selector matched 1 pods for map[app:agnhost] Jul 20 03:10:06.895: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 03:10:06.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4279" for this suite. • [SLOW TEST:5.108 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1485 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":294,"completed":231,"skipped":3754,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 03:10:06.935: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 20 03:10:07.153: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Jul 20 03:10:10.066: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6293 create -f -' Jul 20 03:10:14.683: INFO: stderr: "" Jul 20 03:10:14.683: INFO: stdout: "e2e-test-crd-publish-openapi-3038-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Jul 20 03:10:14.683: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6293 delete e2e-test-crd-publish-openapi-3038-crds test-cr' Jul 20 03:10:14.798: INFO: stderr: "" Jul 20 03:10:14.798: INFO: stdout: "e2e-test-crd-publish-openapi-3038-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Jul 20 03:10:14.798: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6293 apply -f -' Jul 20 03:10:15.120: INFO: stderr: "" Jul 20 03:10:15.120: INFO: stdout: "e2e-test-crd-publish-openapi-3038-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Jul 20 03:10:15.120: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6293 delete e2e-test-crd-publish-openapi-3038-crds test-cr' Jul 20 03:10:15.223: INFO: stderr: "" Jul 20 03:10:15.223: INFO: stdout: "e2e-test-crd-publish-openapi-3038-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Jul 20 03:10:15.223: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3038-crds' Jul 20 03:10:15.520: INFO: stderr: "" Jul 20 03:10:15.521: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3038-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 03:10:17.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6293" for this suite. 
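The `kubectl explain` call above is answered from the server's aggregated OpenAPI v2 document, which is why a CRD published without a validation schema yields an empty DESCRIPTION rather than an error. The same document can be pulled through the discovery client; a minimal sketch, assuming the run's kubeconfig:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// OpenAPISchema fetches the aggregated swagger document that also
	// carries published CRD types such as the e2e-test CRD above.
	doc, err := cs.Discovery().OpenAPISchema()
	if err != nil {
		panic(err)
	}
	fmt.Println(doc.GetInfo().GetTitle(), doc.GetInfo().GetVersion())
}
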
• [SLOW TEST:10.553 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":294,"completed":232,"skipped":3769,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 03:10:17.488: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 20 03:10:18.373: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jul 20 03:10:22.215: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730811418, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730811418, loc:(*time.Location)(0x7deddc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730811418, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730811418, loc:(*time.Location)(0x7deddc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 20 03:10:24.413: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730811418, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730811418, loc:(*time.Location)(0x7deddc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730811418, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730811418, loc:(*time.Location)(0x7deddc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 20 03:10:27.395: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Jul 20 03:10:27.418: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 03:10:27.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3844" for this suite. STEP: Destroying namespace "webhook-3844-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.381 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":294,"completed":233,"skipped":3778,"failed":0} SSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 03:10:27.870: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 20 03:10:29.179: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jul 20 03:10:31.187: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730811429, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730811429, loc:(*time.Location)(0x7deddc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730811429, 
loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730811429, loc:(*time.Location)(0x7deddc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 20 03:10:34.256: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 03:10:34.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6390" for this suite. STEP: Destroying namespace "webhook-6390-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.742 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":294,"completed":234,"skipped":3781,"failed":0} SSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 03:10:34.612: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0720 03:10:36.405453 8 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Jul 20 03:11:38.422: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. 
[AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 03:11:38.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-895" for this suite. • [SLOW TEST:63.819 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":294,"completed":235,"skipped":3785,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 03:11:38.431: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jul 20 03:11:38.533: INFO: Waiting up to 5m0s for pod "downwardapi-volume-271938cb-d204-4d31-a62d-e20ae515c854" in namespace "projected-4443" to be "Succeeded or Failed" Jul 20 03:11:38.535: INFO: Pod "downwardapi-volume-271938cb-d204-4d31-a62d-e20ae515c854": Phase="Pending", Reason="", readiness=false. Elapsed: 1.859179ms Jul 20 03:11:40.678: INFO: Pod "downwardapi-volume-271938cb-d204-4d31-a62d-e20ae515c854": Phase="Pending", Reason="", readiness=false. Elapsed: 2.144713501s Jul 20 03:11:42.702: INFO: Pod "downwardapi-volume-271938cb-d204-4d31-a62d-e20ae515c854": Phase="Pending", Reason="", readiness=false. Elapsed: 4.169457584s Jul 20 03:11:44.707: INFO: Pod "downwardapi-volume-271938cb-d204-4d31-a62d-e20ae515c854": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.173872969s STEP: Saw pod success Jul 20 03:11:44.707: INFO: Pod "downwardapi-volume-271938cb-d204-4d31-a62d-e20ae515c854" satisfied condition "Succeeded or Failed" Jul 20 03:11:44.710: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-271938cb-d204-4d31-a62d-e20ae515c854 container client-container: STEP: delete the pod Jul 20 03:11:45.163: INFO: Waiting for pod downwardapi-volume-271938cb-d204-4d31-a62d-e20ae515c854 to disappear Jul 20 03:11:45.166: INFO: Pod downwardapi-volume-271938cb-d204-4d31-a62d-e20ae515c854 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 03:11:45.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4443" for this suite. 
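[Editor's sketch] The "podname only" spec above mounts the pod's own name into a file through a projected downwardAPI volume and reads it back from the container log. A minimal equivalent, with illustrative names and image:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: downwardapi-podname-demo    # hypothetical
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox
        command: ["cat", "/etc/podinfo/podname"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        projected:
          sources:
          - downwardAPI:
              items:
              - path: podname
                fieldRef:
                  fieldPath: metadata.name   # the pod's own name
    EOF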
• [SLOW TEST:6.796 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":294,"completed":236,"skipped":3792,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 03:11:45.228: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap configmap-263/configmap-test-27555127-bd67-44ea-97ed-ad0cdeb20393 STEP: Creating a pod to test consume configMaps Jul 20 03:11:45.403: INFO: Waiting up to 5m0s for pod "pod-configmaps-96ae83c0-133a-413d-90b9-473a97807276" in namespace "configmap-263" to be "Succeeded or Failed" Jul 20 03:11:45.407: INFO: Pod "pod-configmaps-96ae83c0-133a-413d-90b9-473a97807276": Phase="Pending", Reason="", readiness=false. Elapsed: 4.257815ms Jul 20 03:11:47.559: INFO: Pod "pod-configmaps-96ae83c0-133a-413d-90b9-473a97807276": Phase="Pending", Reason="", readiness=false. Elapsed: 2.156321861s Jul 20 03:11:49.563: INFO: Pod "pod-configmaps-96ae83c0-133a-413d-90b9-473a97807276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.160394617s STEP: Saw pod success Jul 20 03:11:49.563: INFO: Pod "pod-configmaps-96ae83c0-133a-413d-90b9-473a97807276" satisfied condition "Succeeded or Failed" Jul 20 03:11:49.566: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-96ae83c0-133a-413d-90b9-473a97807276 container env-test: STEP: delete the pod Jul 20 03:11:49.655: INFO: Waiting for pod pod-configmaps-96ae83c0-133a-413d-90b9-473a97807276 to disappear Jul 20 03:11:49.869: INFO: Pod pod-configmaps-96ae83c0-133a-413d-90b9-473a97807276 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 03:11:49.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-263" for this suite. 
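[Editor's sketch] Spec 237 above injects a ConfigMap key into the container environment and verifies it from the container output. A minimal reproduction, names illustrative:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: configmap-env-demo          # hypothetical
    data:
      data-1: value-1
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-configmap-env-demo      # hypothetical
    spec:
      restartPolicy: Never
      containers:
      - name: env-test
        image: busybox
        command: ["sh", "-c", "echo $CONFIG_DATA_1"]
        env:
        - name: CONFIG_DATA_1
          valueFrom:
            configMapKeyRef:
              name: configmap-env-demo
              key: data-1
    EOF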
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":294,"completed":237,"skipped":3804,"failed":0} SSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 03:11:49.918: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jul 20 03:11:50.214: INFO: Waiting up to 5m0s for pod "downwardapi-volume-48e1b073-ec48-4b0d-8fea-fd0bd18bc67d" in namespace "projected-7438" to be "Succeeded or Failed" Jul 20 03:11:50.258: INFO: Pod "downwardapi-volume-48e1b073-ec48-4b0d-8fea-fd0bd18bc67d": Phase="Pending", Reason="", readiness=false. Elapsed: 44.173255ms Jul 20 03:11:52.404: INFO: Pod "downwardapi-volume-48e1b073-ec48-4b0d-8fea-fd0bd18bc67d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.189757623s Jul 20 03:11:54.408: INFO: Pod "downwardapi-volume-48e1b073-ec48-4b0d-8fea-fd0bd18bc67d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.193913847s Jul 20 03:11:56.413: INFO: Pod "downwardapi-volume-48e1b073-ec48-4b0d-8fea-fd0bd18bc67d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.198433392s STEP: Saw pod success Jul 20 03:11:56.413: INFO: Pod "downwardapi-volume-48e1b073-ec48-4b0d-8fea-fd0bd18bc67d" satisfied condition "Succeeded or Failed" Jul 20 03:11:56.416: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-48e1b073-ec48-4b0d-8fea-fd0bd18bc67d container client-container: STEP: delete the pod Jul 20 03:11:56.435: INFO: Waiting for pod downwardapi-volume-48e1b073-ec48-4b0d-8fea-fd0bd18bc67d to disappear Jul 20 03:11:56.470: INFO: Pod downwardapi-volume-48e1b073-ec48-4b0d-8fea-fd0bd18bc67d no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 03:11:56.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7438" for this suite. 
• [SLOW TEST:6.570 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":294,"completed":238,"skipped":3809,"failed":0} SSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 03:11:56.489: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Jul 20 03:11:57.103: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 03:11:57.105: INFO: Number of nodes with available pods: 0 Jul 20 03:11:57.105: INFO: Node latest-worker is running more than one daemon pod Jul 20 03:11:58.109: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 03:11:58.111: INFO: Number of nodes with available pods: 0 Jul 20 03:11:58.111: INFO: Node latest-worker is running more than one daemon pod Jul 20 03:11:59.247: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 03:11:59.252: INFO: Number of nodes with available pods: 0 Jul 20 03:11:59.252: INFO: Node latest-worker is running more than one daemon pod Jul 20 03:12:00.110: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 03:12:00.113: INFO: Number of nodes with available pods: 0 Jul 20 03:12:00.113: INFO: Node latest-worker is running more than one daemon pod Jul 20 03:12:01.115: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 03:12:01.343: INFO: Number of nodes with available pods: 0 Jul 20 03:12:01.343: INFO: Node latest-worker is running more than one daemon pod Jul 20 03:12:02.320: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 03:12:02.763: INFO: Number of nodes with available pods: 0 Jul 20 03:12:02.763: 
INFO: Node latest-worker is running more than one daemon pod Jul 20 03:12:03.110: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 03:12:03.114: INFO: Number of nodes with available pods: 0 Jul 20 03:12:03.114: INFO: Node latest-worker is running more than one daemon pod Jul 20 03:12:04.145: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 03:12:04.148: INFO: Number of nodes with available pods: 0 Jul 20 03:12:04.148: INFO: Node latest-worker is running more than one daemon pod Jul 20 03:12:05.170: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 03:12:05.229: INFO: Number of nodes with available pods: 1 Jul 20 03:12:05.229: INFO: Node latest-worker is running more than one daemon pod Jul 20 03:12:06.296: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 03:12:06.299: INFO: Number of nodes with available pods: 1 Jul 20 03:12:06.299: INFO: Node latest-worker is running more than one daemon pod Jul 20 03:12:07.110: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 03:12:07.114: INFO: Number of nodes with available pods: 2 Jul 20 03:12:07.114: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Jul 20 03:12:07.188: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 03:12:07.278: INFO: Number of nodes with available pods: 2 Jul 20 03:12:07.278: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. 
[AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1167, will wait for the garbage collector to delete the pods Jul 20 03:12:08.433: INFO: Deleting DaemonSet.extensions daemon-set took: 6.962012ms Jul 20 03:12:08.933: INFO: Terminating DaemonSet.extensions daemon-set pods took: 500.223434ms Jul 20 03:12:12.237: INFO: Number of nodes with available pods: 0 Jul 20 03:12:12.237: INFO: Number of running nodes: 0, number of available pods: 0 Jul 20 03:12:12.238: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1167/daemonsets","resourceVersion":"112109"},"items":null} Jul 20 03:12:12.241: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1167/pods","resourceVersion":"112109"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 03:12:12.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1167" for this suite. • [SLOW TEST:15.766 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":294,"completed":239,"skipped":3815,"failed":0} SSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 03:12:12.255: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:731 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 03:12:12.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9167" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:735 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":294,"completed":240,"skipped":3820,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 03:12:12.373: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a Namespace STEP: patching the Namespace STEP: get the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 03:12:12.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-8510" for this suite. STEP: Destroying namespace "nspatchtest-8a5f7fc3-d13d-494c-81bb-91d2781f3adf-5787" for this suite. •{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":294,"completed":241,"skipped":3830,"failed":0} ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 03:12:12.573: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-projected-fzj2 STEP: Creating a pod to test atomic-volume-subpath Jul 20 03:12:12.763: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-fzj2" in namespace "subpath-2067" to be "Succeeded or Failed" Jul 20 03:12:12.785: INFO: Pod "pod-subpath-test-projected-fzj2": Phase="Pending", Reason="", readiness=false. Elapsed: 21.998783ms Jul 20 03:12:14.788: INFO: Pod "pod-subpath-test-projected-fzj2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025037372s Jul 20 03:12:16.792: INFO: Pod "pod-subpath-test-projected-fzj2": Phase="Running", Reason="", readiness=true. Elapsed: 4.029122919s Jul 20 03:12:18.797: INFO: Pod "pod-subpath-test-projected-fzj2": Phase="Running", Reason="", readiness=true. Elapsed: 6.03363973s Jul 20 03:12:20.801: INFO: Pod "pod-subpath-test-projected-fzj2": Phase="Running", Reason="", readiness=true. 
Elapsed: 8.037787578s Jul 20 03:12:22.810: INFO: Pod "pod-subpath-test-projected-fzj2": Phase="Running", Reason="", readiness=true. Elapsed: 10.046304651s Jul 20 03:12:24.814: INFO: Pod "pod-subpath-test-projected-fzj2": Phase="Running", Reason="", readiness=true. Elapsed: 12.050160682s Jul 20 03:12:26.818: INFO: Pod "pod-subpath-test-projected-fzj2": Phase="Running", Reason="", readiness=true. Elapsed: 14.054253454s Jul 20 03:12:28.821: INFO: Pod "pod-subpath-test-projected-fzj2": Phase="Running", Reason="", readiness=true. Elapsed: 16.058150413s Jul 20 03:12:30.826: INFO: Pod "pod-subpath-test-projected-fzj2": Phase="Running", Reason="", readiness=true. Elapsed: 18.062733391s Jul 20 03:12:32.829: INFO: Pod "pod-subpath-test-projected-fzj2": Phase="Running", Reason="", readiness=true. Elapsed: 20.065649365s Jul 20 03:12:34.833: INFO: Pod "pod-subpath-test-projected-fzj2": Phase="Running", Reason="", readiness=true. Elapsed: 22.069924072s Jul 20 03:12:36.837: INFO: Pod "pod-subpath-test-projected-fzj2": Phase="Running", Reason="", readiness=true. Elapsed: 24.07393967s Jul 20 03:12:38.842: INFO: Pod "pod-subpath-test-projected-fzj2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.078561677s STEP: Saw pod success Jul 20 03:12:38.842: INFO: Pod "pod-subpath-test-projected-fzj2" satisfied condition "Succeeded or Failed" Jul 20 03:12:38.845: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-projected-fzj2 container test-container-subpath-projected-fzj2: STEP: delete the pod Jul 20 03:12:38.880: INFO: Waiting for pod pod-subpath-test-projected-fzj2 to disappear Jul 20 03:12:38.905: INFO: Pod pod-subpath-test-projected-fzj2 no longer exists STEP: Deleting pod pod-subpath-test-projected-fzj2 Jul 20 03:12:38.905: INFO: Deleting pod "pod-subpath-test-projected-fzj2" in namespace "subpath-2067" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 03:12:38.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2067" for this suite. 
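[Editor's sketch] Spec 242 above mounts a single file out of a projected volume via subPath; the "Atomic writer" naming refers to the kubelet writer that updates such volumes by symlink swap, which is why the container stays Running for roughly 26 seconds of polls before succeeding. The mount shape, reduced to a minimal sketch with illustrative names:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: subpath-demo-config          # hypothetical
    data:
      file.txt: hello
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-subpath-projected-demo   # hypothetical
    spec:
      restartPolicy: Never
      containers:
      - name: test-container-subpath
        image: busybox
        command: ["cat", "/mnt/file.txt"]
        volumeMounts:
        - name: projected-vol
          mountPath: /mnt/file.txt
          subPath: file.txt              # mount one file from the volume
      volumes:
      - name: projected-vol
        projected:
          sources:
          - configMap:
              name: subpath-demo-config
    EOF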
• [SLOW TEST:26.343 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":294,"completed":242,"skipped":3830,"failed":0} SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 03:12:38.917: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on tmpfs Jul 20 03:12:38.978: INFO: Waiting up to 5m0s for pod "pod-47f31380-3576-48ff-9ea6-23674ffc496a" in namespace "emptydir-1146" to be "Succeeded or Failed" Jul 20 03:12:38.992: INFO: Pod "pod-47f31380-3576-48ff-9ea6-23674ffc496a": Phase="Pending", Reason="", readiness=false. Elapsed: 14.281725ms Jul 20 03:12:40.998: INFO: Pod "pod-47f31380-3576-48ff-9ea6-23674ffc496a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020887845s Jul 20 03:12:43.003: INFO: Pod "pod-47f31380-3576-48ff-9ea6-23674ffc496a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025032374s STEP: Saw pod success Jul 20 03:12:43.003: INFO: Pod "pod-47f31380-3576-48ff-9ea6-23674ffc496a" satisfied condition "Succeeded or Failed" Jul 20 03:12:43.005: INFO: Trying to get logs from node latest-worker2 pod pod-47f31380-3576-48ff-9ea6-23674ffc496a container test-container: STEP: delete the pod Jul 20 03:12:43.071: INFO: Waiting for pod pod-47f31380-3576-48ff-9ea6-23674ffc496a to disappear Jul 20 03:12:43.199: INFO: Pod pod-47f31380-3576-48ff-9ea6-23674ffc496a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 03:12:43.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1146" for this suite. 
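[Editor's sketch] Spec 243 above mounts an emptyDir backed by tmpfs (medium: Memory) while running as root and checks the 0777 mode on the mount. The conformance test drives the agnhost mount-test image; a stock-busybox approximation of the same inspection:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-emptydir-0777-demo      # hypothetical
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: busybox
        # Show the filesystem type and the mode bits of the mount point.
        command: ["sh", "-c", "grep ' /mnt/tmpfs ' /proc/mounts; stat -c '%a %U' /mnt/tmpfs"]
        volumeMounts:
        - name: scratch
          mountPath: /mnt/tmpfs
      volumes:
      - name: scratch
        emptyDir:
          medium: Memory                # back the volume with tmpfs
    EOF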
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":243,"skipped":3836,"failed":0} SSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 03:12:43.211: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override command Jul 20 03:12:43.549: INFO: Waiting up to 5m0s for pod "client-containers-9539198c-bf0e-46d8-a290-71c09de2ec13" in namespace "containers-5805" to be "Succeeded or Failed" Jul 20 03:12:43.594: INFO: Pod "client-containers-9539198c-bf0e-46d8-a290-71c09de2ec13": Phase="Pending", Reason="", readiness=false. Elapsed: 45.479086ms Jul 20 03:12:45.599: INFO: Pod "client-containers-9539198c-bf0e-46d8-a290-71c09de2ec13": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049576842s Jul 20 03:12:47.603: INFO: Pod "client-containers-9539198c-bf0e-46d8-a290-71c09de2ec13": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.054329519s STEP: Saw pod success Jul 20 03:12:47.603: INFO: Pod "client-containers-9539198c-bf0e-46d8-a290-71c09de2ec13" satisfied condition "Succeeded or Failed" Jul 20 03:12:47.607: INFO: Trying to get logs from node latest-worker2 pod client-containers-9539198c-bf0e-46d8-a290-71c09de2ec13 container test-container: STEP: delete the pod Jul 20 03:12:47.625: INFO: Waiting for pod client-containers-9539198c-bf0e-46d8-a290-71c09de2ec13 to disappear Jul 20 03:12:47.630: INFO: Pod client-containers-9539198c-bf0e-46d8-a290-71c09de2ec13 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 03:12:47.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-5805" for this suite. 
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":294,"completed":244,"skipped":3841,"failed":0} ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 03:12:47.637: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on tmpfs Jul 20 03:12:47.760: INFO: Waiting up to 5m0s for pod "pod-891bc0d3-e114-4861-8585-c245788fa80a" in namespace "emptydir-4541" to be "Succeeded or Failed" Jul 20 03:12:47.762: INFO: Pod "pod-891bc0d3-e114-4861-8585-c245788fa80a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080799ms Jul 20 03:12:49.766: INFO: Pod "pod-891bc0d3-e114-4861-8585-c245788fa80a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006359397s Jul 20 03:12:51.770: INFO: Pod "pod-891bc0d3-e114-4861-8585-c245788fa80a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010540291s STEP: Saw pod success Jul 20 03:12:51.770: INFO: Pod "pod-891bc0d3-e114-4861-8585-c245788fa80a" satisfied condition "Succeeded or Failed" Jul 20 03:12:51.773: INFO: Trying to get logs from node latest-worker2 pod pod-891bc0d3-e114-4861-8585-c245788fa80a container test-container: STEP: delete the pod Jul 20 03:12:51.813: INFO: Waiting for pod pod-891bc0d3-e114-4861-8585-c245788fa80a to disappear Jul 20 03:12:51.824: INFO: Pod pod-891bc0d3-e114-4861-8585-c245788fa80a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 03:12:51.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4541" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":245,"skipped":3841,"failed":0} SSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 03:12:51.831: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jul 20 03:12:51.917: INFO: Waiting up to 5m0s for pod "downwardapi-volume-074bb927-7d1d-42f9-bce8-9452e0377237" in namespace "downward-api-9773" to be "Succeeded or Failed" Jul 20 03:12:51.948: INFO: Pod "downwardapi-volume-074bb927-7d1d-42f9-bce8-9452e0377237": Phase="Pending", Reason="", readiness=false. Elapsed: 30.523771ms Jul 20 03:12:53.952: INFO: Pod "downwardapi-volume-074bb927-7d1d-42f9-bce8-9452e0377237": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035018148s Jul 20 03:12:55.957: INFO: Pod "downwardapi-volume-074bb927-7d1d-42f9-bce8-9452e0377237": Phase="Running", Reason="", readiness=true. Elapsed: 4.039517867s Jul 20 03:12:57.961: INFO: Pod "downwardapi-volume-074bb927-7d1d-42f9-bce8-9452e0377237": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.043854761s STEP: Saw pod success Jul 20 03:12:57.961: INFO: Pod "downwardapi-volume-074bb927-7d1d-42f9-bce8-9452e0377237" satisfied condition "Succeeded or Failed" Jul 20 03:12:57.964: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-074bb927-7d1d-42f9-bce8-9452e0377237 container client-container: STEP: delete the pod Jul 20 03:12:58.027: INFO: Waiting for pod downwardapi-volume-074bb927-7d1d-42f9-bce8-9452e0377237 to disappear Jul 20 03:12:58.038: INFO: Pod downwardapi-volume-074bb927-7d1d-42f9-bce8-9452e0377237 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 03:12:58.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9773" for this suite. 
• [SLOW TEST:6.214 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":294,"completed":246,"skipped":3844,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 03:12:58.046: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 20 03:12:59.032: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jul 20 03:13:01.043: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730811579, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730811579, loc:(*time.Location)(0x7deddc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730811579, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730811579, loc:(*time.Location)(0x7deddc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 20 03:13:04.074: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 03:13:04.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"webhook-4393" for this suite. STEP: Destroying namespace "webhook-4393-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.671 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":294,"completed":247,"skipped":3877,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 03:13:04.717: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 20 03:13:04.755: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 03:13:08.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3521" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":294,"completed":248,"skipped":3889,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 03:13:08.925: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-map-af233c87-ae4b-4a88-b5dc-a54074661b25 STEP: Creating a pod to test consume secrets Jul 20 03:13:09.003: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e2995461-88cd-48b3-8dfb-2d44d6134024" in namespace "projected-2936" to be "Succeeded or Failed" Jul 20 03:13:09.023: INFO: Pod "pod-projected-secrets-e2995461-88cd-48b3-8dfb-2d44d6134024": Phase="Pending", Reason="", readiness=false. Elapsed: 19.993951ms Jul 20 03:13:11.038: INFO: Pod "pod-projected-secrets-e2995461-88cd-48b3-8dfb-2d44d6134024": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034687964s Jul 20 03:13:13.042: INFO: Pod "pod-projected-secrets-e2995461-88cd-48b3-8dfb-2d44d6134024": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038981494s STEP: Saw pod success Jul 20 03:13:13.042: INFO: Pod "pod-projected-secrets-e2995461-88cd-48b3-8dfb-2d44d6134024" satisfied condition "Succeeded or Failed" Jul 20 03:13:13.045: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-e2995461-88cd-48b3-8dfb-2d44d6134024 container projected-secret-volume-test: STEP: delete the pod Jul 20 03:13:13.077: INFO: Waiting for pod pod-projected-secrets-e2995461-88cd-48b3-8dfb-2d44d6134024 to disappear Jul 20 03:13:13.094: INFO: Pod pod-projected-secrets-e2995461-88cd-48b3-8dfb-2d44d6134024 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 03:13:13.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2936" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":249,"skipped":3899,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 03:13:13.105: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:255 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 20 03:13:13.146: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2529' Jul 20 03:13:13.514: INFO: stderr: "" Jul 20 03:13:13.514: INFO: stdout: "replicationcontroller/agnhost-primary created\n" Jul 20 03:13:13.514: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2529' Jul 20 03:13:13.855: INFO: stderr: "" Jul 20 03:13:13.855: INFO: stdout: "service/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Jul 20 03:13:14.859: INFO: Selector matched 1 pods for map[app:agnhost] Jul 20 03:13:14.859: INFO: Found 0 / 1 Jul 20 03:13:15.859: INFO: Selector matched 1 pods for map[app:agnhost] Jul 20 03:13:15.859: INFO: Found 0 / 1 Jul 20 03:13:16.859: INFO: Selector matched 1 pods for map[app:agnhost] Jul 20 03:13:16.859: INFO: Found 1 / 1 Jul 20 03:13:16.859: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jul 20 03:13:16.862: INFO: Selector matched 1 pods for map[app:agnhost] Jul 20 03:13:16.862: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Jul 20 03:13:16.862: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config describe pod agnhost-primary-q7pds --namespace=kubectl-2529' Jul 20 03:13:16.983: INFO: stderr: "" Jul 20 03:13:16.983: INFO: stdout: "Name: agnhost-primary-q7pds\nNamespace: kubectl-2529\nPriority: 0\nNode: latest-worker2/172.18.0.12\nStart Time: Mon, 20 Jul 2020 03:13:13 +0000\nLabels: app=agnhost\n role=primary\nAnnotations: \nStatus: Running\nIP: 10.244.2.119\nIPs:\n IP: 10.244.2.119\nControlled By: ReplicationController/agnhost-primary\nContainers:\n agnhost-primary:\n Container ID: containerd://9fb30eb10a69ed0a9fcc46b29b58a72acec5576ddec870429023f42a58cd2c32\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20\n Image ID: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Mon, 20 Jul 2020 03:13:15 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-dfmzl (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-dfmzl:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-dfmzl\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 3s default-scheduler Successfully assigned kubectl-2529/agnhost-primary-q7pds to latest-worker2\n Normal Pulled 2s kubelet, latest-worker2 Container image \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20\" already present on machine\n Normal Created 1s kubelet, latest-worker2 Created container agnhost-primary\n Normal Started 1s kubelet, latest-worker2 Started container agnhost-primary\n" Jul 20 03:13:16.983: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config describe rc agnhost-primary --namespace=kubectl-2529' Jul 20 03:13:17.099: INFO: stderr: "" Jul 20 03:13:17.099: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-2529\nSelector: app=agnhost,role=primary\nLabels: app=agnhost\n role=primary\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=primary\n Containers:\n agnhost-primary:\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s replication-controller Created pod: agnhost-primary-q7pds\n" Jul 20 03:13:17.099: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config describe service agnhost-primary --namespace=kubectl-2529' Jul 20 03:13:17.210: INFO: stderr: "" Jul 20 03:13:17.210: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-2529\nLabels: app=agnhost\n role=primary\nAnnotations: \nSelector: app=agnhost,role=primary\nType: ClusterIP\nIP: 10.107.150.32\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.2.119:6379\nSession Affinity: None\nEvents: \n" Jul 20 03:13:17.214: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 
--kubeconfig=/root/.kube/config describe node latest-control-plane' Jul 20 03:13:17.353: INFO: stderr: "" Jul 20 03:13:17.353: INFO: stdout: "Name: latest-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=latest-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 19 Jul 2020 21:38:12 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: latest-control-plane\n AcquireTime: \n RenewTime: Mon, 20 Jul 2020 03:13:12 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Mon, 20 Jul 2020 03:09:56 +0000 Sun, 19 Jul 2020 21:38:08 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Mon, 20 Jul 2020 03:09:56 +0000 Sun, 19 Jul 2020 21:38:08 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Mon, 20 Jul 2020 03:09:56 +0000 Sun, 19 Jul 2020 21:38:08 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Mon, 20 Jul 2020 03:09:56 +0000 Sun, 19 Jul 2020 21:39:43 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.18.0.11\n Hostname: latest-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759872Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759872Ki\n pods: 110\nSystem Info:\n Machine ID: e756079c6ff042fb9f9f1838b420a0a5\n System UUID: 397b219b-882b-4fb6-87c8-e536d116b866\n Boot ID: 11738d2d-5baa-4089-8e7f-2fb0329fce58\n Kernel Version: 4.15.0-109-generic\n OS Image: Ubuntu 20.04 LTS\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.4.0-beta.1-85-g334f567e\n Kubelet Version: v1.19.0-rc.1\n Kube-Proxy Version: v1.19.0-rc.1\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (6 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system etcd-latest-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5h34m\n kube-system kindnet-mg7cm 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 5h34m\n kube-system kube-apiserver-latest-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 5h34m\n kube-system kube-controller-manager-latest-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 5h34m\n kube-system kube-proxy-gb68f 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5h34m\n kube-system kube-scheduler-latest-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 5h34m\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 650m (4%) 100m (0%)\n memory 50Mi (0%) 50Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" Jul 20 03:13:17.353: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config describe namespace kubectl-2529' Jul 20 03:13:17.453: INFO: stderr: "" Jul 20 03:13:17.453: INFO: stdout: "Name: kubectl-2529\nLabels: e2e-framework=kubectl\n 
e2e-run=d50aa47d-1e93-455b-a070-ce7baf916b94\nAnnotations: <none>\nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 03:13:17.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2529" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":294,"completed":250,"skipped":3929,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 03:13:17.463: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:731 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service externalname-service with the type=ExternalName in namespace services-284 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-284 I0720 03:13:17.653983 8 runners.go:190] Created replication controller with name: externalname-service, namespace: services-284, replica count: 2 I0720 03:13:20.704348 8 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0720 03:13:23.704527 8 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jul 20 03:13:23.704: INFO: Creating new exec pod Jul 20 03:13:28.742: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-284 execpodkhtkx -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Jul 20 03:13:29.041: INFO: stderr: "I0720 03:13:28.923075 3252 log.go:181] (0xc0007dfa20) (0xc000967ae0) Create stream\nI0720 03:13:28.923155 3252 log.go:181] (0xc0007dfa20) (0xc000967ae0) Stream added, broadcasting: 1\nI0720 03:13:28.926249 3252 log.go:181] (0xc0007dfa20) Reply frame received for 1\nI0720 03:13:28.926290 3252 log.go:181] (0xc0007dfa20) (0xc000967b80) Create stream\nI0720 03:13:28.926303 3252 log.go:181] (0xc0007dfa20) (0xc000967b80) Stream added, broadcasting: 3\nI0720 03:13:28.927396 3252 log.go:181] (0xc0007dfa20) Reply frame received for 3\nI0720 03:13:28.927440 3252 log.go:181] (0xc0007dfa20) (0xc00079ebe0) Create stream\nI0720 03:13:28.927460 3252 log.go:181] (0xc0007dfa20) (0xc00079ebe0) Stream added, broadcasting: 5\nI0720 03:13:28.929163 3252 log.go:181] (0xc0007dfa20) Reply frame received for 5\nI0720 03:13:29.032214 3252 log.go:181] (0xc0007dfa20) Data frame received for 5\nI0720 03:13:29.032247 3252 log.go:181] (0xc00079ebe0) (5) 
Data frame handling\nI0720 03:13:29.032274 3252 log.go:181] (0xc00079ebe0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0720 03:13:29.032969 3252 log.go:181] (0xc0007dfa20) Data frame received for 5\nI0720 03:13:29.032982 3252 log.go:181] (0xc00079ebe0) (5) Data frame handling\nI0720 03:13:29.032988 3252 log.go:181] (0xc00079ebe0) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0720 03:13:29.033472 3252 log.go:181] (0xc0007dfa20) Data frame received for 5\nI0720 03:13:29.033505 3252 log.go:181] (0xc00079ebe0) (5) Data frame handling\nI0720 03:13:29.033540 3252 log.go:181] (0xc0007dfa20) Data frame received for 3\nI0720 03:13:29.033559 3252 log.go:181] (0xc000967b80) (3) Data frame handling\nI0720 03:13:29.035892 3252 log.go:181] (0xc0007dfa20) Data frame received for 1\nI0720 03:13:29.035916 3252 log.go:181] (0xc000967ae0) (1) Data frame handling\nI0720 03:13:29.035941 3252 log.go:181] (0xc000967ae0) (1) Data frame sent\nI0720 03:13:29.035958 3252 log.go:181] (0xc0007dfa20) (0xc000967ae0) Stream removed, broadcasting: 1\nI0720 03:13:29.035969 3252 log.go:181] (0xc0007dfa20) Go away received\nI0720 03:13:29.036440 3252 log.go:181] (0xc0007dfa20) (0xc000967ae0) Stream removed, broadcasting: 1\nI0720 03:13:29.036464 3252 log.go:181] (0xc0007dfa20) (0xc000967b80) Stream removed, broadcasting: 3\nI0720 03:13:29.036480 3252 log.go:181] (0xc0007dfa20) (0xc00079ebe0) Stream removed, broadcasting: 5\n" Jul 20 03:13:29.041: INFO: stdout: "" Jul 20 03:13:29.042: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-284 execpodkhtkx -- /bin/sh -x -c nc -zv -t -w 2 10.100.210.166 80' Jul 20 03:13:29.237: INFO: stderr: "I0720 03:13:29.163285 3270 log.go:181] (0xc000eb8f20) (0xc000c1f720) Create stream\nI0720 03:13:29.163338 3270 log.go:181] (0xc000eb8f20) (0xc000c1f720) Stream added, broadcasting: 1\nI0720 03:13:29.171268 3270 log.go:181] (0xc000eb8f20) Reply frame received for 1\nI0720 03:13:29.171335 3270 log.go:181] (0xc000eb8f20) (0xc000bd2500) Create stream\nI0720 03:13:29.171361 3270 log.go:181] (0xc000eb8f20) (0xc000bd2500) Stream added, broadcasting: 3\nI0720 03:13:29.172350 3270 log.go:181] (0xc000eb8f20) Reply frame received for 3\nI0720 03:13:29.172409 3270 log.go:181] (0xc000eb8f20) (0xc000376780) Create stream\nI0720 03:13:29.172446 3270 log.go:181] (0xc000eb8f20) (0xc000376780) Stream added, broadcasting: 5\nI0720 03:13:29.173464 3270 log.go:181] (0xc000eb8f20) Reply frame received for 5\nI0720 03:13:29.230082 3270 log.go:181] (0xc000eb8f20) Data frame received for 3\nI0720 03:13:29.230105 3270 log.go:181] (0xc000bd2500) (3) Data frame handling\nI0720 03:13:29.230317 3270 log.go:181] (0xc000eb8f20) Data frame received for 5\nI0720 03:13:29.230342 3270 log.go:181] (0xc000376780) (5) Data frame handling\nI0720 03:13:29.230355 3270 log.go:181] (0xc000376780) (5) Data frame sent\n+ nc -zv -t -w 2 10.100.210.166 80\nConnection to 10.100.210.166 80 port [tcp/http] succeeded!\nI0720 03:13:29.230374 3270 log.go:181] (0xc000eb8f20) Data frame received for 5\nI0720 03:13:29.230387 3270 log.go:181] (0xc000376780) (5) Data frame handling\nI0720 03:13:29.231973 3270 log.go:181] (0xc000eb8f20) Data frame received for 1\nI0720 03:13:29.232006 3270 log.go:181] (0xc000c1f720) (1) Data frame handling\nI0720 03:13:29.232040 3270 log.go:181] (0xc000c1f720) (1) Data frame sent\nI0720 03:13:29.232170 3270 log.go:181] (0xc000eb8f20) (0xc000c1f720) Stream removed, broadcasting: 
1\nI0720 03:13:29.232220 3270 log.go:181] (0xc000eb8f20) Go away received\nI0720 03:13:29.232476 3270 log.go:181] (0xc000eb8f20) (0xc000c1f720) Stream removed, broadcasting: 1\nI0720 03:13:29.232489 3270 log.go:181] (0xc000eb8f20) (0xc000bd2500) Stream removed, broadcasting: 3\nI0720 03:13:29.232495 3270 log.go:181] (0xc000eb8f20) (0xc000376780) Stream removed, broadcasting: 5\n" Jul 20 03:13:29.237: INFO: stdout: "" Jul 20 03:13:29.237: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-284 execpodkhtkx -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.14 32184' Jul 20 03:13:29.463: INFO: stderr: "I0720 03:13:29.377897 3288 log.go:181] (0xc000eb6bb0) (0xc0009c1720) Create stream\nI0720 03:13:29.377951 3288 log.go:181] (0xc000eb6bb0) (0xc0009c1720) Stream added, broadcasting: 1\nI0720 03:13:29.383043 3288 log.go:181] (0xc000eb6bb0) Reply frame received for 1\nI0720 03:13:29.383095 3288 log.go:181] (0xc000eb6bb0) (0xc00095f0e0) Create stream\nI0720 03:13:29.383112 3288 log.go:181] (0xc000eb6bb0) (0xc00095f0e0) Stream added, broadcasting: 3\nI0720 03:13:29.383988 3288 log.go:181] (0xc000eb6bb0) Reply frame received for 3\nI0720 03:13:29.384021 3288 log.go:181] (0xc000eb6bb0) (0xc0009000a0) Create stream\nI0720 03:13:29.384030 3288 log.go:181] (0xc000eb6bb0) (0xc0009000a0) Stream added, broadcasting: 5\nI0720 03:13:29.385101 3288 log.go:181] (0xc000eb6bb0) Reply frame received for 5\nI0720 03:13:29.455047 3288 log.go:181] (0xc000eb6bb0) Data frame received for 5\nI0720 03:13:29.455074 3288 log.go:181] (0xc0009000a0) (5) Data frame handling\nI0720 03:13:29.455093 3288 log.go:181] (0xc0009000a0) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.14 32184\nI0720 03:13:29.455159 3288 log.go:181] (0xc000eb6bb0) Data frame received for 5\nI0720 03:13:29.455179 3288 log.go:181] (0xc0009000a0) (5) Data frame handling\nI0720 03:13:29.455200 3288 log.go:181] (0xc0009000a0) (5) Data frame sent\nConnection to 172.18.0.14 32184 port [tcp/32184] succeeded!\nI0720 03:13:29.455524 3288 log.go:181] (0xc000eb6bb0) Data frame received for 3\nI0720 03:13:29.455554 3288 log.go:181] (0xc00095f0e0) (3) Data frame handling\nI0720 03:13:29.455683 3288 log.go:181] (0xc000eb6bb0) Data frame received for 5\nI0720 03:13:29.455708 3288 log.go:181] (0xc0009000a0) (5) Data frame handling\nI0720 03:13:29.457588 3288 log.go:181] (0xc000eb6bb0) Data frame received for 1\nI0720 03:13:29.457606 3288 log.go:181] (0xc0009c1720) (1) Data frame handling\nI0720 03:13:29.457623 3288 log.go:181] (0xc0009c1720) (1) Data frame sent\nI0720 03:13:29.457641 3288 log.go:181] (0xc000eb6bb0) (0xc0009c1720) Stream removed, broadcasting: 1\nI0720 03:13:29.457721 3288 log.go:181] (0xc000eb6bb0) Go away received\nI0720 03:13:29.457999 3288 log.go:181] (0xc000eb6bb0) (0xc0009c1720) Stream removed, broadcasting: 1\nI0720 03:13:29.458013 3288 log.go:181] (0xc000eb6bb0) (0xc00095f0e0) Stream removed, broadcasting: 3\nI0720 03:13:29.458019 3288 log.go:181] (0xc000eb6bb0) (0xc0009000a0) Stream removed, broadcasting: 5\n" Jul 20 03:13:29.463: INFO: stdout: "" Jul 20 03:13:29.463: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-284 execpodkhtkx -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.12 32184' Jul 20 03:13:29.677: INFO: stderr: "I0720 03:13:29.592557 3306 log.go:181] (0xc000d21290) (0xc0008a2a00) Create stream\nI0720 03:13:29.592613 3306 log.go:181] (0xc000d21290) (0xc0008a2a00) Stream added, 
broadcasting: 1\nI0720 03:13:29.601504 3306 log.go:181] (0xc000d21290) Reply frame received for 1\nI0720 03:13:29.602042 3306 log.go:181] (0xc000d21290) (0xc000848640) Create stream\nI0720 03:13:29.602111 3306 log.go:181] (0xc000d21290) (0xc000848640) Stream added, broadcasting: 3\nI0720 03:13:29.605798 3306 log.go:181] (0xc000d21290) Reply frame received for 3\nI0720 03:13:29.605825 3306 log.go:181] (0xc000d21290) (0xc0001923c0) Create stream\nI0720 03:13:29.605835 3306 log.go:181] (0xc000d21290) (0xc0001923c0) Stream added, broadcasting: 5\nI0720 03:13:29.608470 3306 log.go:181] (0xc000d21290) Reply frame received for 5\nI0720 03:13:29.669483 3306 log.go:181] (0xc000d21290) Data frame received for 5\nI0720 03:13:29.669527 3306 log.go:181] (0xc0001923c0) (5) Data frame handling\nI0720 03:13:29.669541 3306 log.go:181] (0xc0001923c0) (5) Data frame sent\nI0720 03:13:29.669551 3306 log.go:181] (0xc000d21290) Data frame received for 5\nI0720 03:13:29.669559 3306 log.go:181] (0xc0001923c0) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.12 32184\nConnection to 172.18.0.12 32184 port [tcp/32184] succeeded!\nI0720 03:13:29.669581 3306 log.go:181] (0xc000d21290) Data frame received for 3\nI0720 03:13:29.669604 3306 log.go:181] (0xc000848640) (3) Data frame handling\nI0720 03:13:29.671061 3306 log.go:181] (0xc000d21290) Data frame received for 1\nI0720 03:13:29.671097 3306 log.go:181] (0xc0008a2a00) (1) Data frame handling\nI0720 03:13:29.671119 3306 log.go:181] (0xc0008a2a00) (1) Data frame sent\nI0720 03:13:29.671147 3306 log.go:181] (0xc000d21290) (0xc0008a2a00) Stream removed, broadcasting: 1\nI0720 03:13:29.671562 3306 log.go:181] (0xc000d21290) (0xc0008a2a00) Stream removed, broadcasting: 1\nI0720 03:13:29.671584 3306 log.go:181] (0xc000d21290) (0xc000848640) Stream removed, broadcasting: 3\nI0720 03:13:29.671596 3306 log.go:181] (0xc000d21290) (0xc0001923c0) Stream removed, broadcasting: 5\n" Jul 20 03:13:29.677: INFO: stdout: "" Jul 20 03:13:29.677: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 03:13:29.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-284" for this suite. 
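Note for readers reproducing this test by hand: the framework drives the ExternalName-to-NodePort flip through the Go client, but the same mutation can be sketched with kubectl. The object names reuse this run's; the patch shape, the example.com target, and the selector label are illustrative assumptions (the selector must match the label the backing replication controller puts on its pods, since that is what gives the probes endpoints to hit).

    # create the service as type=ExternalName (illustrative external target)
    kubectl -n services-284 create service externalname externalname-service \
        --external-name=example.com
    # flip it to NodePort: drop externalName, change the type, add a port and a selector
    kubectl -n services-284 patch service externalname-service --type=json -p='[
      {"op":"remove","path":"/spec/externalName"},
      {"op":"replace","path":"/spec/type","value":"NodePort"},
      {"op":"add","path":"/spec/ports","value":[{"port":80,"protocol":"TCP","targetPort":80}]},
      {"op":"add","path":"/spec/selector","value":{"name":"externalname-service"}}
    ]'
    # then probe it from an exec pod, the way the test does
    kubectl -n services-284 exec execpodkhtkx -- nc -zv -t -w 2 externalname-service 80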
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:735 • [SLOW TEST:12.305 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":294,"completed":251,"skipped":3975,"failed":0} SSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 03:13:29.768: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:731 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service nodeport-test with type=NodePort in namespace services-9689 STEP: creating replication controller nodeport-test in namespace services-9689 I0720 03:13:29.858401 8 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-9689, replica count: 2 I0720 03:13:32.908910 8 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0720 03:13:35.909090 8 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jul 20 03:13:35.909: INFO: Creating new exec pod Jul 20 03:13:40.982: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-9689 execpodqgh6n -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Jul 20 03:13:41.216: INFO: stderr: "I0720 03:13:41.110801 3324 log.go:181] (0xc000d13080) (0xc0005b0960) Create stream\nI0720 03:13:41.110858 3324 log.go:181] (0xc000d13080) (0xc0005b0960) Stream added, broadcasting: 1\nI0720 03:13:41.116053 3324 log.go:181] (0xc000d13080) Reply frame received for 1\nI0720 03:13:41.116105 3324 log.go:181] (0xc000d13080) (0xc000344140) Create stream\nI0720 03:13:41.116121 3324 log.go:181] (0xc000d13080) (0xc000344140) Stream added, broadcasting: 3\nI0720 03:13:41.117083 3324 log.go:181] (0xc000d13080) Reply frame received for 3\nI0720 03:13:41.117120 3324 log.go:181] (0xc000d13080) (0xc000344780) Create stream\nI0720 03:13:41.117132 3324 log.go:181] (0xc000d13080) (0xc000344780) Stream added, broadcasting: 5\nI0720 03:13:41.117938 3324 log.go:181] (0xc000d13080) Reply frame received for 5\nI0720 03:13:41.208144 3324 log.go:181] (0xc000d13080) Data frame received for 3\nI0720 03:13:41.208183 3324 log.go:181] (0xc000344140) (3) Data frame handling\nI0720 03:13:41.208217 3324 log.go:181] (0xc000d13080) Data frame received for 5\nI0720 
03:13:41.208230 3324 log.go:181] (0xc000344780) (5) Data frame handling\nI0720 03:13:41.208243 3324 log.go:181] (0xc000344780) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0720 03:13:41.208489 3324 log.go:181] (0xc000d13080) Data frame received for 5\nI0720 03:13:41.208509 3324 log.go:181] (0xc000344780) (5) Data frame handling\nI0720 03:13:41.210473 3324 log.go:181] (0xc000d13080) Data frame received for 1\nI0720 03:13:41.210494 3324 log.go:181] (0xc0005b0960) (1) Data frame handling\nI0720 03:13:41.210544 3324 log.go:181] (0xc0005b0960) (1) Data frame sent\nI0720 03:13:41.210596 3324 log.go:181] (0xc000d13080) (0xc0005b0960) Stream removed, broadcasting: 1\nI0720 03:13:41.210926 3324 log.go:181] (0xc000d13080) Go away received\nI0720 03:13:41.211052 3324 log.go:181] (0xc000d13080) (0xc0005b0960) Stream removed, broadcasting: 1\nI0720 03:13:41.211077 3324 log.go:181] (0xc000d13080) (0xc000344140) Stream removed, broadcasting: 3\nI0720 03:13:41.211092 3324 log.go:181] (0xc000d13080) (0xc000344780) Stream removed, broadcasting: 5\n" Jul 20 03:13:41.216: INFO: stdout: "" Jul 20 03:13:41.217: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-9689 execpodqgh6n -- /bin/sh -x -c nc -zv -t -w 2 10.98.152.11 80' Jul 20 03:13:41.446: INFO: stderr: "I0720 03:13:41.358432 3343 log.go:181] (0xc000dc8d10) (0xc000d9a5a0) Create stream\nI0720 03:13:41.358531 3343 log.go:181] (0xc000dc8d10) (0xc000d9a5a0) Stream added, broadcasting: 1\nI0720 03:13:41.367047 3343 log.go:181] (0xc000dc8d10) Reply frame received for 1\nI0720 03:13:41.367075 3343 log.go:181] (0xc000dc8d10) (0xc000abf2c0) Create stream\nI0720 03:13:41.367081 3343 log.go:181] (0xc000dc8d10) (0xc000abf2c0) Stream added, broadcasting: 3\nI0720 03:13:41.367946 3343 log.go:181] (0xc000dc8d10) Reply frame received for 3\nI0720 03:13:41.367986 3343 log.go:181] (0xc000dc8d10) (0xc0005d4c80) Create stream\nI0720 03:13:41.368000 3343 log.go:181] (0xc000dc8d10) (0xc0005d4c80) Stream added, broadcasting: 5\nI0720 03:13:41.369006 3343 log.go:181] (0xc000dc8d10) Reply frame received for 5\nI0720 03:13:41.437620 3343 log.go:181] (0xc000dc8d10) Data frame received for 5\nI0720 03:13:41.437664 3343 log.go:181] (0xc0005d4c80) (5) Data frame handling\nI0720 03:13:41.437689 3343 log.go:181] (0xc0005d4c80) (5) Data frame sent\nI0720 03:13:41.437713 3343 log.go:181] (0xc000dc8d10) Data frame received for 5\nI0720 03:13:41.437738 3343 log.go:181] (0xc0005d4c80) (5) Data frame handling\n+ nc -zv -t -w 2 10.98.152.11 80\nConnection to 10.98.152.11 80 port [tcp/http] succeeded!\nI0720 03:13:41.437773 3343 log.go:181] (0xc000dc8d10) Data frame received for 3\nI0720 03:13:41.437796 3343 log.go:181] (0xc000abf2c0) (3) Data frame handling\nI0720 03:13:41.440341 3343 log.go:181] (0xc000dc8d10) Data frame received for 1\nI0720 03:13:41.440404 3343 log.go:181] (0xc000d9a5a0) (1) Data frame handling\nI0720 03:13:41.440424 3343 log.go:181] (0xc000d9a5a0) (1) Data frame sent\nI0720 03:13:41.440442 3343 log.go:181] (0xc000dc8d10) (0xc000d9a5a0) Stream removed, broadcasting: 1\nI0720 03:13:41.440460 3343 log.go:181] (0xc000dc8d10) Go away received\nI0720 03:13:41.440960 3343 log.go:181] (0xc000dc8d10) (0xc000d9a5a0) Stream removed, broadcasting: 1\nI0720 03:13:41.440987 3343 log.go:181] (0xc000dc8d10) (0xc000abf2c0) Stream removed, broadcasting: 3\nI0720 03:13:41.440999 3343 log.go:181] (0xc000dc8d10) (0xc0005d4c80) Stream 
removed, broadcasting: 5\n" Jul 20 03:13:41.446: INFO: stdout: "" Jul 20 03:13:41.447: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-9689 execpodqgh6n -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.14 31656' Jul 20 03:13:41.631: INFO: stderr: "I0720 03:13:41.569351 3361 log.go:181] (0xc000be20b0) (0xc000beb040) Create stream\nI0720 03:13:41.569405 3361 log.go:181] (0xc000be20b0) (0xc000beb040) Stream added, broadcasting: 1\nI0720 03:13:41.575341 3361 log.go:181] (0xc000be20b0) Reply frame received for 1\nI0720 03:13:41.575385 3361 log.go:181] (0xc000be20b0) (0xc000be6320) Create stream\nI0720 03:13:41.575396 3361 log.go:181] (0xc000be20b0) (0xc000be6320) Stream added, broadcasting: 3\nI0720 03:13:41.576218 3361 log.go:181] (0xc000be20b0) Reply frame received for 3\nI0720 03:13:41.576253 3361 log.go:181] (0xc000be20b0) (0xc000be6c80) Create stream\nI0720 03:13:41.576265 3361 log.go:181] (0xc000be20b0) (0xc000be6c80) Stream added, broadcasting: 5\nI0720 03:13:41.577189 3361 log.go:181] (0xc000be20b0) Reply frame received for 5\nI0720 03:13:41.623964 3361 log.go:181] (0xc000be20b0) Data frame received for 5\nI0720 03:13:41.624030 3361 log.go:181] (0xc000be6c80) (5) Data frame handling\nI0720 03:13:41.624059 3361 log.go:181] (0xc000be6c80) (5) Data frame sent\nI0720 03:13:41.624077 3361 log.go:181] (0xc000be20b0) Data frame received for 5\nI0720 03:13:41.624095 3361 log.go:181] (0xc000be6c80) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.14 31656\nConnection to 172.18.0.14 31656 port [tcp/31656] succeeded!\nI0720 03:13:41.624145 3361 log.go:181] (0xc000be20b0) Data frame received for 3\nI0720 03:13:41.624195 3361 log.go:181] (0xc000be6320) (3) Data frame handling\nI0720 03:13:41.626003 3361 log.go:181] (0xc000be20b0) Data frame received for 1\nI0720 03:13:41.626017 3361 log.go:181] (0xc000beb040) (1) Data frame handling\nI0720 03:13:41.626024 3361 log.go:181] (0xc000beb040) (1) Data frame sent\nI0720 03:13:41.626033 3361 log.go:181] (0xc000be20b0) (0xc000beb040) Stream removed, broadcasting: 1\nI0720 03:13:41.626042 3361 log.go:181] (0xc000be20b0) Go away received\nI0720 03:13:41.626548 3361 log.go:181] (0xc000be20b0) (0xc000beb040) Stream removed, broadcasting: 1\nI0720 03:13:41.626572 3361 log.go:181] (0xc000be20b0) (0xc000be6320) Stream removed, broadcasting: 3\nI0720 03:13:41.626582 3361 log.go:181] (0xc000be20b0) (0xc000be6c80) Stream removed, broadcasting: 5\n" Jul 20 03:13:41.631: INFO: stdout: "" Jul 20 03:13:41.631: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-9689 execpodqgh6n -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.12 31656' Jul 20 03:13:41.846: INFO: stderr: "I0720 03:13:41.768719 3379 log.go:181] (0xc000539130) (0xc000e90460) Create stream\nI0720 03:13:41.768879 3379 log.go:181] (0xc000539130) (0xc000e90460) Stream added, broadcasting: 1\nI0720 03:13:41.773946 3379 log.go:181] (0xc000539130) Reply frame received for 1\nI0720 03:13:41.773979 3379 log.go:181] (0xc000539130) (0xc000818280) Create stream\nI0720 03:13:41.773988 3379 log.go:181] (0xc000539130) (0xc000818280) Stream added, broadcasting: 3\nI0720 03:13:41.774765 3379 log.go:181] (0xc000539130) Reply frame received for 3\nI0720 03:13:41.774820 3379 log.go:181] (0xc000539130) (0xc0005201e0) Create stream\nI0720 03:13:41.774843 3379 log.go:181] (0xc000539130) (0xc0005201e0) Stream added, broadcasting: 5\nI0720 03:13:41.775526 3379 log.go:181] 
(0xc000539130) Reply frame received for 5\nI0720 03:13:41.838518 3379 log.go:181] (0xc000539130) Data frame received for 5\nI0720 03:13:41.838551 3379 log.go:181] (0xc0005201e0) (5) Data frame handling\nI0720 03:13:41.838571 3379 log.go:181] (0xc0005201e0) (5) Data frame sent\nI0720 03:13:41.838582 3379 log.go:181] (0xc000539130) Data frame received for 5\nI0720 03:13:41.838592 3379 log.go:181] (0xc0005201e0) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.12 31656\nConnection to 172.18.0.12 31656 port [tcp/31656] succeeded!\nI0720 03:13:41.838630 3379 log.go:181] (0xc0005201e0) (5) Data frame sent\nI0720 03:13:41.838887 3379 log.go:181] (0xc000539130) Data frame received for 3\nI0720 03:13:41.838943 3379 log.go:181] (0xc000818280) (3) Data frame handling\nI0720 03:13:41.838979 3379 log.go:181] (0xc000539130) Data frame received for 5\nI0720 03:13:41.839000 3379 log.go:181] (0xc0005201e0) (5) Data frame handling\nI0720 03:13:41.840434 3379 log.go:181] (0xc000539130) Data frame received for 1\nI0720 03:13:41.840454 3379 log.go:181] (0xc000e90460) (1) Data frame handling\nI0720 03:13:41.840486 3379 log.go:181] (0xc000e90460) (1) Data frame sent\nI0720 03:13:41.840521 3379 log.go:181] (0xc000539130) (0xc000e90460) Stream removed, broadcasting: 1\nI0720 03:13:41.840670 3379 log.go:181] (0xc000539130) Go away received\nI0720 03:13:41.841064 3379 log.go:181] (0xc000539130) (0xc000e90460) Stream removed, broadcasting: 1\nI0720 03:13:41.841085 3379 log.go:181] (0xc000539130) (0xc000818280) Stream removed, broadcasting: 3\nI0720 03:13:41.841097 3379 log.go:181] (0xc000539130) (0xc0005201e0) Stream removed, broadcasting: 5\n" Jul 20 03:13:41.846: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 03:13:41.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9689" for this suite. 
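For reference, the nodeport-test service this block exercises can be written out directly; a minimal sketch with an illustrative selector label (the test's replication controller labels its pods to match, so the service has endpoints):

    cat <<'EOF' | kubectl -n services-9689 create -f -
    apiVersion: v1
    kind: Service
    metadata:
      name: nodeport-test
    spec:
      type: NodePort
      selector:
        name: nodeport-test
      ports:
      - port: 80
        targetPort: 80
        protocol: TCP
    EOF
    # the API server allocates the node port (31656 in this run); read it back with:
    kubectl -n services-9689 get service nodeport-test -o jsonpath='{.spec.ports[0].nodePort}'
    # and verify from a node IP, as the nc probes above do:
    kubectl -n services-9689 exec execpodqgh6n -- nc -zv -t -w 2 172.18.0.14 31656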
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:735 • [SLOW TEST:12.091 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":294,"completed":252,"skipped":3981,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 03:13:41.860: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jul 20 03:13:41.943: INFO: Waiting up to 5m0s for pod "downwardapi-volume-16521150-9559-4f81-ae6e-131d0be191e2" in namespace "downward-api-1745" to be "Succeeded or Failed" Jul 20 03:13:41.947: INFO: Pod "downwardapi-volume-16521150-9559-4f81-ae6e-131d0be191e2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.72087ms Jul 20 03:13:43.952: INFO: Pod "downwardapi-volume-16521150-9559-4f81-ae6e-131d0be191e2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008148552s Jul 20 03:13:45.969: INFO: Pod "downwardapi-volume-16521150-9559-4f81-ae6e-131d0be191e2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025401303s Jul 20 03:13:47.973: INFO: Pod "downwardapi-volume-16521150-9559-4f81-ae6e-131d0be191e2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.030043102s STEP: Saw pod success Jul 20 03:13:47.973: INFO: Pod "downwardapi-volume-16521150-9559-4f81-ae6e-131d0be191e2" satisfied condition "Succeeded or Failed" Jul 20 03:13:47.977: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-16521150-9559-4f81-ae6e-131d0be191e2 container client-container: STEP: delete the pod Jul 20 03:13:48.095: INFO: Waiting for pod downwardapi-volume-16521150-9559-4f81-ae6e-131d0be191e2 to disappear Jul 20 03:13:48.136: INFO: Pod downwardapi-volume-16521150-9559-4f81-ae6e-131d0be191e2 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 03:13:48.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1745" for this suite. 
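The pod behind this test projects its own CPU request into a file through a downwardAPI volume, and the framework reads the value back from the container log. A minimal sketch of such a pod; the name, image, and 250m request are illustrative, not the framework's generated spec:

    cat <<'EOF' | kubectl create -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: downwardapi-volume-demo
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: docker.io/library/busybox:1.29
        command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
        resources:
          requests:
            cpu: 250m
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
              divisor: 1m    # with this divisor the file reads "250"
    EOF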
• [SLOW TEST:6.285 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":294,"completed":253,"skipped":4039,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 03:13:48.146: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Jul 20 03:13:49.107: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Jul 20 03:13:51.116: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730811629, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730811629, loc:(*time.Location)(0x7deddc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730811629, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730811629, loc:(*time.Location)(0x7deddc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-84c84cf5f9\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 20 03:13:54.151: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 20 03:13:54.155: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 03:13:55.382: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-9308" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:7.623 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":294,"completed":254,"skipped":4058,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 03:13:55.769: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 20 03:13:57.195: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jul 20 03:13:59.203: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730811637, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730811637, loc:(*time.Location)(0x7deddc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730811637, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730811636, loc:(*time.Location)(0x7deddc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 20 03:14:01.207: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730811637, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730811637, loc:(*time.Location)(0x7deddc0)}}, Reason:"MinimumReplicasUnavailable", 
Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730811637, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730811636, loc:(*time.Location)(0x7deddc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 20 03:14:04.309: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 03:14:16.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6201" for this suite. STEP: Destroying namespace "webhook-6201-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:20.881 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":294,"completed":255,"skipped":4070,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 03:14:16.651: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-map-8363eaab-a871-492b-bd9d-59074b48d4ba STEP: Creating a pod to test consume secrets Jul 20 03:14:16.727: INFO: Waiting up to 5m0s for pod "pod-secrets-745c9d4b-5202-4fdb-b5e1-c38a07d091bb" in namespace "secrets-3682" to be "Succeeded or Failed" Jul 20 03:14:16.748: INFO: Pod "pod-secrets-745c9d4b-5202-4fdb-b5e1-c38a07d091bb": Phase="Pending", Reason="", readiness=false. Elapsed: 20.356361ms Jul 20 03:14:18.752: INFO: Pod "pod-secrets-745c9d4b-5202-4fdb-b5e1-c38a07d091bb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024446751s Jul 20 03:14:20.756: INFO: Pod "pod-secrets-745c9d4b-5202-4fdb-b5e1-c38a07d091bb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028583446s STEP: Saw pod success Jul 20 03:14:20.756: INFO: Pod "pod-secrets-745c9d4b-5202-4fdb-b5e1-c38a07d091bb" satisfied condition "Succeeded or Failed" Jul 20 03:14:20.759: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-745c9d4b-5202-4fdb-b5e1-c38a07d091bb container secret-volume-test: STEP: delete the pod Jul 20 03:14:20.885: INFO: Waiting for pod pod-secrets-745c9d4b-5202-4fdb-b5e1-c38a07d091bb to disappear Jul 20 03:14:20.892: INFO: Pod pod-secrets-745c9d4b-5202-4fdb-b5e1-c38a07d091bb no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 03:14:20.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3682" for this suite. 
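"With mappings" here means the secret volume lists items that remap a key onto a chosen relative path instead of exposing every key under its own name. A minimal sketch, with illustrative key and path names:

    kubectl create secret generic secret-test-map --from-literal=data-1=value-1
    cat <<'EOF' | kubectl create -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-secrets-demo
    spec:
      restartPolicy: Never
      containers:
      - name: secret-volume-test
        image: docker.io/library/busybox:1.29
        command: ["sh", "-c", "cat /etc/secret-volume/new-path-data-1"]
        volumeMounts:
        - name: secret-volume
          mountPath: /etc/secret-volume
      volumes:
      - name: secret-volume
        secret:
          secretName: secret-test-map
          items:
          - key: data-1
            path: new-path-data-1   # the mapping: the key is visible only at this path
    EOF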
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":294,"completed":256,"skipped":4080,"failed":0} S ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 03:14:20.931: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Jul 20 03:14:26.108: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 03:14:26.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-3426" for this suite. • [SLOW TEST:5.307 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":294,"completed":257,"skipped":4081,"failed":0} SS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 03:14:26.238: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-75b25223-8eae-4a91-ae1d-216e6a1ccf5a STEP: Creating a pod to test consume secrets Jul 20 03:14:26.322: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-89a93b3e-f613-442f-a09f-87b59dfc1df1" in namespace "projected-8749" to be "Succeeded or Failed" Jul 20 03:14:26.329: INFO: Pod "pod-projected-secrets-89a93b3e-f613-442f-a09f-87b59dfc1df1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.328808ms Jul 20 03:14:28.482: INFO: Pod "pod-projected-secrets-89a93b3e-f613-442f-a09f-87b59dfc1df1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.159976481s Jul 20 03:14:30.486: INFO: Pod "pod-projected-secrets-89a93b3e-f613-442f-a09f-87b59dfc1df1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.163848903s Jul 20 03:14:32.490: INFO: Pod "pod-projected-secrets-89a93b3e-f613-442f-a09f-87b59dfc1df1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.167866437s STEP: Saw pod success Jul 20 03:14:32.490: INFO: Pod "pod-projected-secrets-89a93b3e-f613-442f-a09f-87b59dfc1df1" satisfied condition "Succeeded or Failed" Jul 20 03:14:32.494: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-89a93b3e-f613-442f-a09f-87b59dfc1df1 container projected-secret-volume-test: STEP: delete the pod Jul 20 03:14:32.675: INFO: Waiting for pod pod-projected-secrets-89a93b3e-f613-442f-a09f-87b59dfc1df1 to disappear Jul 20 03:14:32.686: INFO: Pod pod-projected-secrets-89a93b3e-f613-442f-a09f-87b59dfc1df1 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 03:14:32.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8749" for this suite. • [SLOW TEST:6.466 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":258,"skipped":4083,"failed":0} SSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 03:14:32.705: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:255 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1576 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jul 20 03:14:32.806: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-3831' Jul 20 03:14:32.937: INFO: stderr: "" Jul 20 03:14:32.937: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was 
created Jul 20 03:14:37.988: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-3831 -o json' Jul 20 03:14:38.096: INFO: stderr: "" Jul 20 03:14:38.096: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-07-20T03:14:32Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {}\n }\n },\n \"manager\": \"kubectl-run\",\n \"operation\": \"Update\",\n \"time\": \"2020-07-20T03:14:32Z\"\n },\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:phase\": {},\n \"f:podIP\": {},\n \"f:podIPs\": {\n \".\": {},\n \"k:{\\\"ip\\\":\\\"10.244.2.131\\\"}\": {\n \".\": {},\n \"f:ip\": {}\n }\n },\n \"f:startTime\": {}\n }\n },\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"time\": \"2020-07-20T03:14:36Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-3831\",\n \"resourceVersion\": \"113304\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-3831/pods/e2e-test-httpd-pod\",\n \"uid\": \"0f6ded1c-07cb-4ea9-bed6-37fddb604e93\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-hp2d2\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"latest-worker2\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n 
\"name\": \"default-token-hp2d2\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-hp2d2\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-07-20T03:14:32Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-07-20T03:14:36Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-07-20T03:14:36Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-07-20T03:14:32Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://1bff6f1d2e68a16253578e3eb05afe013bc5ca157b4db1cad7e1cccafe4696cd\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-07-20T03:14:35Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.18.0.12\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.131\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.2.131\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-07-20T03:14:32Z\"\n }\n}\n" STEP: replace the image in the pod Jul 20 03:14:38.096: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-3831' Jul 20 03:14:38.432: INFO: stderr: "" Jul 20 03:14:38.432: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1581 Jul 20 03:14:38.435: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-3831' Jul 20 03:14:43.835: INFO: stderr: "" Jul 20 03:14:43.835: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 03:14:43.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3831" for this suite. 
• [SLOW TEST:11.152 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1572 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":294,"completed":259,"skipped":4092,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 03:14:43.858: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 20 03:14:44.854: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jul 20 03:14:46.981: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730811684, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730811684, loc:(*time.Location)(0x7deddc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730811684, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730811684, loc:(*time.Location)(0x7deddc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 20 03:14:48.991: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730811684, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730811684, loc:(*time.Location)(0x7deddc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730811684, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730811684, 
loc:(*time.Location)(0x7deddc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 20 03:14:52.018: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 03:14:52.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9952" for this suite. STEP: Destroying namespace "webhook-9952-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.393 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":294,"completed":260,"skipped":4127,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 03:14:52.252: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name cm-test-opt-del-6a04ba99-851f-41d0-a621-46bc206351bb STEP: Creating configMap with name cm-test-opt-upd-652012af-2d9a-4dfb-9f91-0ba6078b0308 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-6a04ba99-851f-41d0-a621-46bc206351bb STEP: Updating configmap cm-test-opt-upd-652012af-2d9a-4dfb-9f91-0ba6078b0308 STEP: Creating configMap with name cm-test-opt-create-dc6452f2-d40b-4356-aea3-9b2b915e2f72 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 03:16:22.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-595" for this suite. 
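The ConfigMap volume test above hinges on optional ConfigMap volumes: the pod starts even while a referenced ConfigMap is absent, and subsequent create, update, and delete operations are projected into the mounted files, which is the update the test waits to observe. A minimal sketch of such a volume follows; the helper and argument names are hypothetical, and the Optional flag is the point.

```go
package example

import corev1 "k8s.io/api/core/v1"

// optionalConfigMapVolume builds a Volume backed by a ConfigMap that may not
// exist yet. With Optional set, the kubelet mounts the volume empty instead
// of blocking the pod, then projects the ConfigMap's keys in once it appears.
func optionalConfigMapVolume(volName, cmName string) corev1.Volume {
	optional := true
	return corev1.Volume{
		Name: volName,
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
				Optional:             &optional,
			},
		},
	}
}
```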
• [SLOW TEST:90.603 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":294,"completed":261,"skipped":4196,"failed":0} SSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 03:16:22.855: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating replication controller my-hostname-basic-520bf918-206d-497b-8a8f-2b6a23dac02b Jul 20 03:16:22.936: INFO: Pod name my-hostname-basic-520bf918-206d-497b-8a8f-2b6a23dac02b: Found 0 pods out of 1 Jul 20 03:16:27.943: INFO: Pod name my-hostname-basic-520bf918-206d-497b-8a8f-2b6a23dac02b: Found 1 pods out of 1 Jul 20 03:16:27.943: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-520bf918-206d-497b-8a8f-2b6a23dac02b" are running Jul 20 03:16:27.946: INFO: Pod "my-hostname-basic-520bf918-206d-497b-8a8f-2b6a23dac02b-lwhxr" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-20 03:16:23 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-20 03:16:25 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-20 03:16:25 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-20 03:16:22 +0000 UTC Reason: Message:}]) Jul 20 03:16:27.946: INFO: Trying to dial the pod Jul 20 03:16:32.958: INFO: Controller my-hostname-basic-520bf918-206d-497b-8a8f-2b6a23dac02b: Got expected result from replica 1 [my-hostname-basic-520bf918-206d-497b-8a8f-2b6a23dac02b-lwhxr]: "my-hostname-basic-520bf918-206d-497b-8a8f-2b6a23dac02b-lwhxr", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 03:16:32.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-8859" for this suite. 
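A sketch of the kind of ReplicationController the test above creates: each replica serves its own hostname over HTTP so the test can dial every pod and confirm each one answers with its own name. The agnhost image tag, the serve-hostname port 9376, and the helper's signature are assumptions here, not taken from the log.

```go
package example

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createServeHostnameRC creates a ReplicationController whose replicas each
// serve their pod name over HTTP, which is what the test dials to verify
// every replica responds.
func createServeHostnameRC(ctx context.Context, cs kubernetes.Interface, ns, name string, replicas int32) error {
	rc := &corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &replicas,
			Selector: map[string]string{"name": name},
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": name}},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  name,
						Image: "k8s.gcr.io/e2e-test-images/agnhost:2.20", // assumed registry and tag
						Args:  []string{"serve-hostname"},
						Ports: []corev1.ContainerPort{{ContainerPort: 9376}},
					}},
				},
			},
		},
	}
	_, err := cs.CoreV1().ReplicationControllers(ns).Create(ctx, rc, metav1.CreateOptions{})
	return err
}
```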
• [SLOW TEST:10.111 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":294,"completed":262,"skipped":4203,"failed":0} SSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 03:16:32.966: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:731 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service multi-endpoint-test in namespace services-3766 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3766 to expose endpoints map[] Jul 20 03:16:33.081: INFO: Failed to get Endpoints object: endpoints "multi-endpoint-test" not found Jul 20 03:16:34.086: INFO: successfully validated that service multi-endpoint-test in namespace services-3766 exposes endpoints map[] STEP: Creating pod pod1 in namespace services-3766 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3766 to expose endpoints map[pod1:[100]] Jul 20 03:16:38.220: INFO: successfully validated that service multi-endpoint-test in namespace services-3766 exposes endpoints map[pod1:[100]] STEP: Creating pod pod2 in namespace services-3766 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3766 to expose endpoints map[pod1:[100] pod2:[101]] Jul 20 03:16:42.277: INFO: successfully validated that service multi-endpoint-test in namespace services-3766 exposes endpoints map[pod1:[100] pod2:[101]] STEP: Deleting pod pod1 in namespace services-3766 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3766 to expose endpoints map[pod2:[101]] Jul 20 03:16:42.366: INFO: successfully validated that service multi-endpoint-test in namespace services-3766 exposes endpoints map[pod2:[101]] STEP: Deleting pod pod2 in namespace services-3766 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3766 to expose endpoints map[] Jul 20 03:16:43.389: INFO: successfully validated that service multi-endpoint-test in namespace services-3766 exposes endpoints map[] [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 03:16:43.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3766" for this suite.
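The multiport Service under test can be sketched as follows. The selector label and the helper name are assumptions, but the two named ports mirror the endpoints shape map[pod1:[100] pod2:[101]] seen in the log: the endpoints controller publishes one entry per matching pod target port.

```go
package example

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
)

// createMultiportService creates a Service exposing two named ports that map
// to two different container ports on the selected pods.
func createMultiportService(ctx context.Context, cs kubernetes.Interface, ns string) error {
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "multi-endpoint-test"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"app": "multi-endpoint-test"}, // assumed label
			Ports: []corev1.ServicePort{
				{Name: "portname1", Port: 80, TargetPort: intstr.FromInt(100)},
				{Name: "portname2", Port: 81, TargetPort: intstr.FromInt(101)},
			},
		},
	}
	_, err := cs.CoreV1().Services(ns).Create(ctx, svc, metav1.CreateOptions{})
	return err
}
```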
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:735 • [SLOW TEST:10.495 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":294,"completed":263,"skipped":4212,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 03:16:43.461: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 20 03:16:43.591: INFO: Waiting up to 5m0s for pod "busybox-user-65534-bbdd7802-37c7-4317-a46e-964988b9e1c7" in namespace "security-context-test-7345" to be "Succeeded or Failed" Jul 20 03:16:43.600: INFO: Pod "busybox-user-65534-bbdd7802-37c7-4317-a46e-964988b9e1c7": Phase="Pending", Reason="", readiness=false. Elapsed: 9.216529ms Jul 20 03:16:45.604: INFO: Pod "busybox-user-65534-bbdd7802-37c7-4317-a46e-964988b9e1c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013115664s Jul 20 03:16:47.609: INFO: Pod "busybox-user-65534-bbdd7802-37c7-4317-a46e-964988b9e1c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018025967s Jul 20 03:16:47.609: INFO: Pod "busybox-user-65534-bbdd7802-37c7-4317-a46e-964988b9e1c7" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 03:16:47.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-7345" for this suite. 
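A sketch of the kind of pod the security-context test above creates; the image and UID follow the log's naming, while the helper itself is hypothetical. The pod runs a single command as UID 65534 and exits, so the test can assert the "Succeeded or Failed" condition and check the effective user.

```go
package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// busyboxAsUser returns a pod that runs `id -u` as the requested UID and
// exits; 65534 is the conventional "nobody" user.
func busyboxAsUser(name string, uid int64) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "id -u"},
				SecurityContext: &corev1.SecurityContext{
					RunAsUser: &uid, // forces the container process to this UID
				},
			}},
		},
	}
}
```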
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":264,"skipped":4223,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 03:16:47.616: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name secret-emptykey-test-05f9e193-3e2b-471a-94da-7f2bfad50cd1 [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 03:16:47.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9518" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":294,"completed":265,"skipped":4285,"failed":0} SSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 03:16:47.707: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod Jul 20 03:16:52.297: INFO: Successfully updated pod "annotationupdateb05c0a65-08bb-4309-8034-98ad1e38e8bf" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 03:16:56.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7058" for this suite. 
• [SLOW TEST:8.682 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":294,"completed":266,"skipped":4290,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 03:16:56.390: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Jul 20 03:16:56.502: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-486 /api/v1/namespaces/watch-486/configmaps/e2e-watch-test-watch-closed 7cdb5bd1-5f6f-4e10-a3d3-f1e4cdd02c66 114006 0 2020-07-20 03:16:56 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-07-20 03:16:56 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Jul 20 03:16:56.502: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-486 /api/v1/namespaces/watch-486/configmaps/e2e-watch-test-watch-closed 7cdb5bd1-5f6f-4e10-a3d3-f1e4cdd02c66 114007 0 2020-07-20 03:16:56 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-07-20 03:16:56 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Jul 20 03:16:56.533: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-486 /api/v1/namespaces/watch-486/configmaps/e2e-watch-test-watch-closed 7cdb5bd1-5f6f-4e10-a3d3-f1e4cdd02c66 114008 0 2020-07-20 03:16:56 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-07-20 03:16:56 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Jul 20 03:16:56.533: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-486 /api/v1/namespaces/watch-486/configmaps/e2e-watch-test-watch-closed 7cdb5bd1-5f6f-4e10-a3d3-f1e4cdd02c66 114009 0 2020-07-20 03:16:56 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-07-20 03:16:56 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 03:16:56.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-486" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":294,"completed":267,"skipped":4295,"failed":0} S ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 03:16:56.546: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 03:17:02.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9644" for this suite. 
• [SLOW TEST:5.764 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":294,"completed":268,"skipped":4296,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 03:17:02.310: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 20 03:17:02.386: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 03:17:03.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-2166" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":294,"completed":269,"skipped":4315,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 03:17:03.486: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 03:17:07.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-9208" for this suite. 
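For the Docker Containers test above, the behavior under test is simply what an unset Command and Args mean. A sketch of such a container spec (helper name hypothetical):

```go
package example

import corev1 "k8s.io/api/core/v1"

// imageDefaultsContainer leaves Command and Args unset, so the container runs
// the image's own ENTRYPOINT and CMD. Setting Command would override the
// ENTRYPOINT; setting Args would override the CMD.
func imageDefaultsContainer(image string) corev1.Container {
	return corev1.Container{
		Name:  "use-image-defaults",
		Image: image,
		// Command: nil -> keep the image ENTRYPOINT
		// Args:    nil -> keep the image CMD
	}
}
```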
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":294,"completed":270,"skipped":4324,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 03:17:07.593: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 03:17:18.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6833" for this suite. • [SLOW TEST:11.098 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":294,"completed":271,"skipped":4352,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 03:17:18.692: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on node default medium Jul 20 03:17:18.767: INFO: Waiting up to 5m0s for pod "pod-f4c0ec69-2de9-42fc-9ab1-1c0bdcd9f3b2" in namespace "emptydir-839" to be "Succeeded or Failed" Jul 20 03:17:18.806: INFO: Pod "pod-f4c0ec69-2de9-42fc-9ab1-1c0bdcd9f3b2": Phase="Pending", Reason="", readiness=false. Elapsed: 38.316866ms Jul 20 03:17:20.813: INFO: Pod "pod-f4c0ec69-2de9-42fc-9ab1-1c0bdcd9f3b2": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.045724661s Jul 20 03:17:22.818: INFO: Pod "pod-f4c0ec69-2de9-42fc-9ab1-1c0bdcd9f3b2": Phase="Running", Reason="", readiness=true. Elapsed: 4.050952476s Jul 20 03:17:24.823: INFO: Pod "pod-f4c0ec69-2de9-42fc-9ab1-1c0bdcd9f3b2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.055372215s STEP: Saw pod success Jul 20 03:17:24.823: INFO: Pod "pod-f4c0ec69-2de9-42fc-9ab1-1c0bdcd9f3b2" satisfied condition "Succeeded or Failed" Jul 20 03:17:24.825: INFO: Trying to get logs from node latest-worker2 pod pod-f4c0ec69-2de9-42fc-9ab1-1c0bdcd9f3b2 container test-container: STEP: delete the pod Jul 20 03:17:24.859: INFO: Waiting for pod pod-f4c0ec69-2de9-42fc-9ab1-1c0bdcd9f3b2 to disappear Jul 20 03:17:24.873: INFO: Pod pod-f4c0ec69-2de9-42fc-9ab1-1c0bdcd9f3b2 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 03:17:24.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-839" for this suite. • [SLOW TEST:6.188 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":272,"skipped":4391,"failed":0} SSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 03:17:24.880: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Jul 20 03:17:24.963: INFO: Waiting up to 5m0s for pod "downward-api-0d53d2f7-5fb6-4de4-a950-c45a0da66a87" in namespace "downward-api-8060" to be "Succeeded or Failed" Jul 20 03:17:25.007: INFO: Pod "downward-api-0d53d2f7-5fb6-4de4-a950-c45a0da66a87": Phase="Pending", Reason="", readiness=false. Elapsed: 44.489303ms Jul 20 03:17:27.047: INFO: Pod "downward-api-0d53d2f7-5fb6-4de4-a950-c45a0da66a87": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0837693s Jul 20 03:17:29.051: INFO: Pod "downward-api-0d53d2f7-5fb6-4de4-a950-c45a0da66a87": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.088083906s STEP: Saw pod success Jul 20 03:17:29.051: INFO: Pod "downward-api-0d53d2f7-5fb6-4de4-a950-c45a0da66a87" satisfied condition "Succeeded or Failed" Jul 20 03:17:29.054: INFO: Trying to get logs from node latest-worker2 pod downward-api-0d53d2f7-5fb6-4de4-a950-c45a0da66a87 container dapi-container: STEP: delete the pod Jul 20 03:17:29.112: INFO: Waiting for pod downward-api-0d53d2f7-5fb6-4de4-a950-c45a0da66a87 to disappear Jul 20 03:17:29.115: INFO: Pod downward-api-0d53d2f7-5fb6-4de4-a950-c45a0da66a87 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 03:17:29.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8060" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":294,"completed":273,"skipped":4399,"failed":0} SSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 03:17:29.121: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Jul 20 03:17:29.438: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6619 /api/v1/namespaces/watch-6619/configmaps/e2e-watch-test-configmap-a 5abd4cc9-a471-407d-95ed-2f4471340fa1 114339 0 2020-07-20 03:17:29 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-07-20 03:17:29 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Jul 20 03:17:29.439: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6619 /api/v1/namespaces/watch-6619/configmaps/e2e-watch-test-configmap-a 5abd4cc9-a471-407d-95ed-2f4471340fa1 114339 0 2020-07-20 03:17:29 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-07-20 03:17:29 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification Jul 20 03:17:39.446: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6619 /api/v1/namespaces/watch-6619/configmaps/e2e-watch-test-configmap-a 5abd4cc9-a471-407d-95ed-2f4471340fa1 114386 0 2020-07-20 03:17:29 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-07-20 03:17:39 +0000 UTC FieldsV1 
{"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Jul 20 03:17:39.446: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6619 /api/v1/namespaces/watch-6619/configmaps/e2e-watch-test-configmap-a 5abd4cc9-a471-407d-95ed-2f4471340fa1 114386 0 2020-07-20 03:17:29 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-07-20 03:17:39 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Jul 20 03:17:49.454: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6619 /api/v1/namespaces/watch-6619/configmaps/e2e-watch-test-configmap-a 5abd4cc9-a471-407d-95ed-2f4471340fa1 114415 0 2020-07-20 03:17:29 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-07-20 03:17:49 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Jul 20 03:17:49.454: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6619 /api/v1/namespaces/watch-6619/configmaps/e2e-watch-test-configmap-a 5abd4cc9-a471-407d-95ed-2f4471340fa1 114415 0 2020-07-20 03:17:29 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-07-20 03:17:49 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification Jul 20 03:17:59.468: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6619 /api/v1/namespaces/watch-6619/configmaps/e2e-watch-test-configmap-a 5abd4cc9-a471-407d-95ed-2f4471340fa1 114445 0 2020-07-20 03:17:29 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-07-20 03:17:49 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Jul 20 03:17:59.468: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6619 /api/v1/namespaces/watch-6619/configmaps/e2e-watch-test-configmap-a 5abd4cc9-a471-407d-95ed-2f4471340fa1 114445 0 2020-07-20 03:17:29 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-07-20 03:17:49 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Jul 20 03:18:09.476: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-6619 /api/v1/namespaces/watch-6619/configmaps/e2e-watch-test-configmap-b be9d58f3-2a09-42fe-8fc2-da1d9de81de6 114473 0 2020-07-20 03:18:09 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-07-20 
03:18:09 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Jul 20 03:18:09.476: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-6619 /api/v1/namespaces/watch-6619/configmaps/e2e-watch-test-configmap-b be9d58f3-2a09-42fe-8fc2-da1d9de81de6 114473 0 2020-07-20 03:18:09 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-07-20 03:18:09 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification Jul 20 03:18:19.488: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-6619 /api/v1/namespaces/watch-6619/configmaps/e2e-watch-test-configmap-b be9d58f3-2a09-42fe-8fc2-da1d9de81de6 114504 0 2020-07-20 03:18:09 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-07-20 03:18:09 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Jul 20 03:18:19.488: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-6619 /api/v1/namespaces/watch-6619/configmaps/e2e-watch-test-configmap-b be9d58f3-2a09-42fe-8fc2-da1d9de81de6 114504 0 2020-07-20 03:18:09 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-07-20 03:18:09 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 03:18:29.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6619" for this suite. 
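The three watches this test opens differ only in their label selectors, which is why every event on ConfigMap A above arrives exactly twice: once on the A watch and once on the A-or-B watch. A sketch, with the helper name hypothetical and a set-based selector for the combined watch:

```go
package example

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
)

// openScopedWatches opens three ConfigMap watches: one per label value plus a
// set-based selector covering both. Each watcher only receives events for
// objects matching its selector.
func openScopedWatches(ctx context.Context, cs kubernetes.Interface, ns string) (a, b, ab watch.Interface, err error) {
	cms := cs.CoreV1().ConfigMaps(ns)
	if a, err = cms.Watch(ctx, metav1.ListOptions{
		LabelSelector: "watch-this-configmap=multiple-watchers-A"}); err != nil {
		return
	}
	if b, err = cms.Watch(ctx, metav1.ListOptions{
		LabelSelector: "watch-this-configmap=multiple-watchers-B"}); err != nil {
		return
	}
	ab, err = cms.Watch(ctx, metav1.ListOptions{
		LabelSelector: "watch-this-configmap in (multiple-watchers-A,multiple-watchers-B)"})
	return
}
```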
• [SLOW TEST:60.378 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":294,"completed":274,"skipped":4402,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 03:18:29.499: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jul 20 03:18:29.607: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fb177ac7-1ab3-48e4-908b-80a552409cf7" in namespace "projected-6385" to be "Succeeded or Failed" Jul 20 03:18:29.622: INFO: Pod "downwardapi-volume-fb177ac7-1ab3-48e4-908b-80a552409cf7": Phase="Pending", Reason="", readiness=false. Elapsed: 14.733818ms Jul 20 03:18:31.626: INFO: Pod "downwardapi-volume-fb177ac7-1ab3-48e4-908b-80a552409cf7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018747313s Jul 20 03:18:33.630: INFO: Pod "downwardapi-volume-fb177ac7-1ab3-48e4-908b-80a552409cf7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023067646s STEP: Saw pod success Jul 20 03:18:33.630: INFO: Pod "downwardapi-volume-fb177ac7-1ab3-48e4-908b-80a552409cf7" satisfied condition "Succeeded or Failed" Jul 20 03:18:33.634: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-fb177ac7-1ab3-48e4-908b-80a552409cf7 container client-container: STEP: delete the pod Jul 20 03:18:33.667: INFO: Waiting for pod downwardapi-volume-fb177ac7-1ab3-48e4-908b-80a552409cf7 to disappear Jul 20 03:18:33.682: INFO: Pod downwardapi-volume-fb177ac7-1ab3-48e4-908b-80a552409cf7 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 03:18:33.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6385" for this suite. 
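The cpu-request value in the projected downward API test above comes from a resourceFieldRef rather than a fieldRef. A sketch of that projection (helper name hypothetical); Divisor picks the reporting unit, with "1" meaning whole cores rounded up.

```go
package example

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// cpuRequestProjection exposes a container's requests.cpu as a file in a
// projected downward API volume; the test reads the file's content from the
// container logs and checks the value.
func cpuRequestProjection(containerName string) corev1.VolumeProjection {
	return corev1.VolumeProjection{
		DownwardAPI: &corev1.DownwardAPIProjection{
			Items: []corev1.DownwardAPIVolumeFile{{
				Path: "cpu_request",
				ResourceFieldRef: &corev1.ResourceFieldSelector{
					ContainerName: containerName,
					Resource:      "requests.cpu",
					Divisor:       resource.MustParse("1"),
				},
			}},
		},
	}
}
```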
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":294,"completed":275,"skipped":4437,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 03:18:33.691: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Jul 20 03:18:33.772: INFO: >>> kubeConfig: /root/.kube/config Jul 20 03:18:36.735: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 03:18:47.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7755" for this suite. • [SLOW TEST:13.815 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":294,"completed":276,"skipped":4457,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 03:18:47.507: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 20 03:18:48.305: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jul 20 03:18:50.384: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, 
ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730811928, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730811928, loc:(*time.Location)(0x7deddc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730811928, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730811928, loc:(*time.Location)(0x7deddc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 20 03:18:52.388: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730811928, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730811928, loc:(*time.Location)(0x7deddc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730811928, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730811928, loc:(*time.Location)(0x7deddc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 20 03:18:55.451: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 03:18:55.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4836" for this suite. STEP: Destroying namespace "webhook-4836-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.094 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":294,"completed":277,"skipped":4475,"failed":0} SS ------------------------------ [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 03:18:55.601: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:731 [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-9149 STEP: creating service affinity-nodeport-transition in namespace services-9149 STEP: creating replication controller affinity-nodeport-transition in namespace services-9149 I0720 03:18:55.755342 8 runners.go:190] Created replication controller with name: affinity-nodeport-transition, namespace: services-9149, replica count: 3 I0720 03:18:58.805691 8 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0720 03:19:01.805961 8 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jul 20 03:19:01.816: INFO: Creating new exec pod Jul 20 03:19:06.837: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-9149 execpod-affinity26hm4 -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80' Jul 20 03:19:07.063: INFO: stderr: "[log.go:181 stream-frame trace elided] + nc -zv -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jul 20 03:19:07.063: INFO: stdout: "" Jul 20 03:19:07.064: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-9149 execpod-affinity26hm4 -- /bin/sh -x -c nc -zv -t -w 2 10.109.137.88 80' Jul 20 03:19:07.277: INFO: stderr: "[log.go:181 stream-frame trace elided] + nc -zv -t -w 2 10.109.137.88 80\nConnection to 10.109.137.88 80 port [tcp/http] succeeded!\n" Jul 20 03:19:07.277: INFO: stdout: "" Jul 20 03:19:07.278: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-9149 execpod-affinity26hm4 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.14 31828' Jul 20 03:19:07.486: INFO: stderr: "[log.go:181 stream-frame trace elided] + nc -zv -t -w 2 172.18.0.14 31828\nConnection to 172.18.0.14 31828 port [tcp/31828] succeeded!\n" Jul 20 03:19:07.486: INFO: stdout: "" Jul 20 03:19:07.486: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-9149 execpod-affinity26hm4 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.12 31828' Jul 20 03:19:07.703: INFO: stderr: "[log.go:181 stream-frame trace elided] + nc -zv -t -w 2 172.18.0.12 31828\nConnection to 172.18.0.12 31828 port [tcp/31828] succeeded!\n" Jul 20 03:19:07.703: INFO: stdout: "" Jul 20 03:19:07.712: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-9149 execpod-affinity26hm4 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.14:31828/ ; done' Jul 20 03:19:08.036: INFO: stderr: "[log.go:181 stream-frame trace elided] + seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:31828/\n[echo/curl trace repeats for the remaining 15 iterations]\n" Jul 20 03:19:08.036: INFO: stdout: "\naffinity-nodeport-transition-2nm9h\naffinity-nodeport-transition-vsjvv\naffinity-nodeport-transition-vsjvv\naffinity-nodeport-transition-2nm9h\naffinity-nodeport-transition-2nm9h\naffinity-nodeport-transition-2nm9h\naffinity-nodeport-transition-vsjvv\naffinity-nodeport-transition-vsjvv\naffinity-nodeport-transition-vsjvv\naffinity-nodeport-transition-2nm9h\naffinity-nodeport-transition-wk5xp\naffinity-nodeport-transition-wk5xp\naffinity-nodeport-transition-vsjvv\naffinity-nodeport-transition-vsjvv\naffinity-nodeport-transition-vsjvv\naffinity-nodeport-transition-wk5xp" Jul 20 03:19:08.036: INFO: Received response from host: affinity-nodeport-transition-2nm9h Jul 20 03:19:08.036: INFO: Received response from host: affinity-nodeport-transition-vsjvv Jul 20 03:19:08.036: INFO: Received response from host: affinity-nodeport-transition-vsjvv Jul 20 03:19:08.036: INFO: Received response from host: affinity-nodeport-transition-2nm9h Jul 20 03:19:08.036: INFO: Received response from host: affinity-nodeport-transition-2nm9h Jul 20 03:19:08.036: INFO: Received response from host: affinity-nodeport-transition-2nm9h Jul 20 03:19:08.036: INFO: Received response from host: affinity-nodeport-transition-vsjvv Jul 20 03:19:08.036: INFO: Received response from host: affinity-nodeport-transition-vsjvv Jul 20 03:19:08.036: INFO: Received response from host: affinity-nodeport-transition-vsjvv Jul 20 03:19:08.036: INFO: Received response from host: affinity-nodeport-transition-2nm9h Jul 20 03:19:08.036: INFO: Received response from host: affinity-nodeport-transition-wk5xp Jul 20 03:19:08.036: INFO: Received response from host: affinity-nodeport-transition-wk5xp Jul 20 03:19:08.036: INFO: Received response from host: affinity-nodeport-transition-vsjvv Jul 20 03:19:08.036: INFO: Received response from host: affinity-nodeport-transition-vsjvv Jul 20 03:19:08.036: INFO: Received response from host: affinity-nodeport-transition-vsjvv Jul 20 03:19:08.036: INFO: Received response from host: affinity-nodeport-transition-wk5xp Jul 20 03:19:08.046: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-9149 execpod-affinity26hm4 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.14:31828/ ; done' Jul 20 03:19:08.349: INFO: stderr: "[log.go:181 stream-frame trace elided] + seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:31828/\n[echo/curl trace repeats for the remaining 15 iterations]\n" Jul 20 03:19:08.350: INFO: stdout: "\naffinity-nodeport-transition-2nm9h\naffinity-nodeport-transition-2nm9h\naffinity-nodeport-transition-2nm9h\naffinity-nodeport-transition-2nm9h\naffinity-nodeport-transition-2nm9h\naffinity-nodeport-transition-2nm9h\naffinity-nodeport-transition-2nm9h\naffinity-nodeport-transition-2nm9h\naffinity-nodeport-transition-2nm9h\naffinity-nodeport-transition-2nm9h\naffinity-nodeport-transition-2nm9h\naffinity-nodeport-transition-2nm9h\naffinity-nodeport-transition-2nm9h\naffinity-nodeport-transition-2nm9h\naffinity-nodeport-transition-2nm9h\naffinity-nodeport-transition-2nm9h" Jul 20 03:19:08.350: INFO: Received response from host: affinity-nodeport-transition-2nm9h Jul 20 03:19:08.350: INFO: Received response from host: affinity-nodeport-transition-2nm9h Jul 20 03:19:08.350: INFO: Received response from host: affinity-nodeport-transition-2nm9h Jul 20 03:19:08.350: INFO: Received response from host: affinity-nodeport-transition-2nm9h Jul 20 03:19:08.350: INFO: Received response from host: affinity-nodeport-transition-2nm9h Jul 20 03:19:08.350: INFO: Received response from host: affinity-nodeport-transition-2nm9h Jul 20 03:19:08.350: INFO: Received response from host: affinity-nodeport-transition-2nm9h Jul 20 03:19:08.350: INFO: Received response from host: affinity-nodeport-transition-2nm9h Jul 20 03:19:08.350: INFO: Received response from host: affinity-nodeport-transition-2nm9h Jul 20 03:19:08.350: INFO: Received response from host: affinity-nodeport-transition-2nm9h Jul 20 03:19:08.350: INFO: Received response from host: affinity-nodeport-transition-2nm9h Jul 20 03:19:08.350: INFO: Received response from host: affinity-nodeport-transition-2nm9h Jul 20 03:19:08.350: INFO: Received response from host: affinity-nodeport-transition-2nm9h Jul 20 03:19:08.350: INFO: Received response from host: affinity-nodeport-transition-2nm9h Jul 20 03:19:08.350: INFO: Received
response from host: affinity-nodeport-transition-2nm9h Jul 20 03:19:08.350: INFO: Received response from host: affinity-nodeport-transition-2nm9h Jul 20 03:19:08.350: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-9149, will wait for the garbage collector to delete the pods Jul 20 03:19:08.461: INFO: Deleting ReplicationController affinity-nodeport-transition took: 7.635736ms Jul 20 03:19:08.861: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 400.217973ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 03:19:24.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9149" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:735 • [SLOW TEST:28.422 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":294,"completed":278,"skipped":4477,"failed":0} SSSSSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 03:19:24.023: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 03:19:40.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-5818" for this suite. 
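For reference, the shape of object the services-9149 run above exercises: a NodePort Service whose sessionAffinity is flipped between None and ClientIP, which is why the first curl loop spreads across all three affinity-nodeport-transition pods and the second pins every request to affinity-nodeport-transition-2nm9h. A minimal sketch with illustrative names (the suite builds its objects in Go rather than from a manifest):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: affinity-demo            # illustrative name
spec:
  type: NodePort
  sessionAffinity: ClientIP      # pin each client IP to one backend
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800      # default affinity window
  selector:
    app: affinity-demo
  ports:
  - port: 80
    targetPort: 8080
EOF
# Switch affinity off again in place, as the test does through the API:
kubectl patch service affinity-demo -p '{"spec":{"sessionAffinity":"None"}}'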
• [SLOW TEST:16.116 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":294,"completed":279,"skipped":4486,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 03:19:40.140: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jul 20 03:19:40.244: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5bbe36b5-ad8d-46d9-9103-130a273cf261" in namespace "downward-api-6496" to be "Succeeded or Failed" Jul 20 03:19:40.254: INFO: Pod "downwardapi-volume-5bbe36b5-ad8d-46d9-9103-130a273cf261": Phase="Pending", Reason="", readiness=false. Elapsed: 9.823868ms Jul 20 03:19:42.258: INFO: Pod "downwardapi-volume-5bbe36b5-ad8d-46d9-9103-130a273cf261": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013837058s Jul 20 03:19:44.261: INFO: Pod "downwardapi-volume-5bbe36b5-ad8d-46d9-9103-130a273cf261": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017381435s STEP: Saw pod success Jul 20 03:19:44.261: INFO: Pod "downwardapi-volume-5bbe36b5-ad8d-46d9-9103-130a273cf261" satisfied condition "Succeeded or Failed" Jul 20 03:19:44.264: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-5bbe36b5-ad8d-46d9-9103-130a273cf261 container client-container: STEP: delete the pod Jul 20 03:19:44.334: INFO: Waiting for pod downwardapi-volume-5bbe36b5-ad8d-46d9-9103-130a273cf261 to disappear Jul 20 03:19:44.343: INFO: Pod downwardapi-volume-5bbe36b5-ad8d-46d9-9103-130a273cf261 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 03:19:44.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6496" for this suite. 
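The job-5818 test above depends on restartPolicy: OnFailure, so failing containers are restarted in place by the kubelet rather than the Job controller creating replacement pods, and the Job still reaches its completion count. A hedged sketch of such a Job; the name, image, and failure condition are illustrative, not the suite's:

kubectl apply -f - <<'EOF'
apiVersion: batch/v1
kind: Job
metadata:
  name: sometimes-fails          # illustrative name
spec:
  completions: 3
  template:
    spec:
      restartPolicy: OnFailure   # local restarts, not new pods
      containers:
      - name: worker
        image: busybox
        # Fail when the epoch second is even; a retry moments later succeeds.
        command: ["sh", "-c", "[ $(( $(date +%s) % 2 )) -eq 1 ] || exit 1"]
EOF
kubectl wait --for=condition=complete job/sometimes-fails --timeout=120s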
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":294,"completed":280,"skipped":4499,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 03:19:44.350: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 20 03:19:45.032: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jul 20 03:19:47.042: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730811985, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730811985, loc:(*time.Location)(0x7deddc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730811985, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730811984, loc:(*time.Location)(0x7deddc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 20 03:19:50.095: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 03:19:51.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9494" for this suite. STEP: Destroying namespace "webhook-9494-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.978 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":294,"completed":281,"skipped":4524,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 03:19:51.328: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jul 20 03:19:51.406: INFO: Waiting up to 5m0s for pod "downwardapi-volume-be5ad2c7-5b5e-4591-b92d-11b8da8c2b39" in namespace "downward-api-5130" to be "Succeeded or Failed" Jul 20 03:19:51.416: INFO: Pod "downwardapi-volume-be5ad2c7-5b5e-4591-b92d-11b8da8c2b39": Phase="Pending", Reason="", readiness=false. Elapsed: 9.944459ms Jul 20 03:19:53.420: INFO: Pod "downwardapi-volume-be5ad2c7-5b5e-4591-b92d-11b8da8c2b39": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014043843s Jul 20 03:19:55.493: INFO: Pod "downwardapi-volume-be5ad2c7-5b5e-4591-b92d-11b8da8c2b39": Phase="Running", Reason="", readiness=true. Elapsed: 4.087133855s Jul 20 03:19:57.497: INFO: Pod "downwardapi-volume-be5ad2c7-5b5e-4591-b92d-11b8da8c2b39": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.090705095s STEP: Saw pod success Jul 20 03:19:57.497: INFO: Pod "downwardapi-volume-be5ad2c7-5b5e-4591-b92d-11b8da8c2b39" satisfied condition "Succeeded or Failed" Jul 20 03:19:57.500: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-be5ad2c7-5b5e-4591-b92d-11b8da8c2b39 container client-container: STEP: delete the pod Jul 20 03:19:57.554: INFO: Waiting for pod downwardapi-volume-be5ad2c7-5b5e-4591-b92d-11b8da8c2b39 to disappear Jul 20 03:19:57.559: INFO: Pod downwardapi-volume-be5ad2c7-5b5e-4591-b92d-11b8da8c2b39 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 03:19:57.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5130" for this suite. 
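The webhook-9494 test lists its MutatingWebhookConfiguration objects, checks that a ConfigMap gets mutated, deletes the whole collection, and then verifies a fresh ConfigMap is left untouched. The equivalent operations from the command line; the label selector and names here are illustrative, not the ones the suite uses:

# List mutating webhook configurations (cluster-scoped):
kubectl get mutatingwebhookconfigurations
# Delete a labelled collection in a single call:
kubectl delete mutatingwebhookconfigurations -l e2e-list-demo=true
# A ConfigMap created afterwards should no longer be mutated:
kubectl create configmap unmutated-demo
kubectl get configmap unmutated-demo -o yaml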
• [SLOW TEST:6.239 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":294,"completed":282,"skipped":4534,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 03:19:57.567: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-afcb7f8d-3398-492b-ae83-9ac411257886 in namespace container-probe-2429 Jul 20 03:20:01.707: INFO: Started pod liveness-afcb7f8d-3398-492b-ae83-9ac411257886 in namespace container-probe-2429 STEP: checking the pod's current state and verifying that restartCount is present Jul 20 03:20:01.710: INFO: Initial restart count of pod liveness-afcb7f8d-3398-492b-ae83-9ac411257886 is 0 Jul 20 03:20:17.748: INFO: Restart count of pod container-probe-2429/liveness-afcb7f8d-3398-492b-ae83-9ac411257886 is now 1 (16.038001602s elapsed) Jul 20 03:20:37.793: INFO: Restart count of pod container-probe-2429/liveness-afcb7f8d-3398-492b-ae83-9ac411257886 is now 2 (36.082603215s elapsed) Jul 20 03:20:57.836: INFO: Restart count of pod container-probe-2429/liveness-afcb7f8d-3398-492b-ae83-9ac411257886 is now 3 (56.126372904s elapsed) Jul 20 03:21:17.921: INFO: Restart count of pod container-probe-2429/liveness-afcb7f8d-3398-492b-ae83-9ac411257886 is now 4 (1m16.210751886s elapsed) Jul 20 03:22:18.086: INFO: Restart count of pod container-probe-2429/liveness-afcb7f8d-3398-492b-ae83-9ac411257886 is now 5 (2m16.375966153s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 03:22:18.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2429" for this suite. 
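In the downward-api-5130 test the container deliberately sets no CPU limit, so the resourceFieldRef for limits.cpu falls back to the node's allocatable CPU; that fallback is the behaviour under test. A sketch under illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cpu-limit-demo           # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu   # no limit set, so node allocatable CPU is reported
EOF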
• [SLOW TEST:140.570 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":294,"completed":283,"skipped":4575,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 03:22:18.138: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jul 20 03:22:18.218: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1fdea6c9-86d6-4a65-b195-9a3ef3b2915d" in namespace "projected-4436" to be "Succeeded or Failed" Jul 20 03:22:18.292: INFO: Pod "downwardapi-volume-1fdea6c9-86d6-4a65-b195-9a3ef3b2915d": Phase="Pending", Reason="", readiness=false. Elapsed: 74.126524ms Jul 20 03:22:20.296: INFO: Pod "downwardapi-volume-1fdea6c9-86d6-4a65-b195-9a3ef3b2915d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077500612s Jul 20 03:22:22.300: INFO: Pod "downwardapi-volume-1fdea6c9-86d6-4a65-b195-9a3ef3b2915d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.081475168s STEP: Saw pod success Jul 20 03:22:22.300: INFO: Pod "downwardapi-volume-1fdea6c9-86d6-4a65-b195-9a3ef3b2915d" satisfied condition "Succeeded or Failed" Jul 20 03:22:22.302: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-1fdea6c9-86d6-4a65-b195-9a3ef3b2915d container client-container: STEP: delete the pod Jul 20 03:22:22.335: INFO: Waiting for pod downwardapi-volume-1fdea6c9-86d6-4a65-b195-9a3ef3b2915d to disappear Jul 20 03:22:22.345: INFO: Pod downwardapi-volume-1fdea6c9-86d6-4a65-b195-9a3ef3b2915d no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 03:22:22.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4436" for this suite. 
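The container-probe-2429 pod keeps failing its liveness probe, and the log shows restartCount rising from 1 to 5 with the gap widening from about 20s to 60s as the kubelet applies exponential back-off between restarts. A comparable self-failing pod, hedged as a sketch:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo            # illustrative name
spec:
  containers:
  - name: liveness
    image: busybox
    # Healthy for 10s, then the probed file disappears and the probe fails.
    command: ["sh", "-c", "touch /tmp/healthy; sleep 10; rm /tmp/healthy; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/healthy"]
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
# Watch the restart count climb, as the test polls it:
kubectl get pod liveness-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'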
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":294,"completed":284,"skipped":4612,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 03:22:22.352: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Jul 20 03:22:30.521: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 20 03:22:30.542: INFO: Pod pod-with-poststart-exec-hook still exists Jul 20 03:22:32.542: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 20 03:22:32.546: INFO: Pod pod-with-poststart-exec-hook still exists Jul 20 03:22:34.542: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 20 03:22:34.546: INFO: Pod pod-with-poststart-exec-hook still exists Jul 20 03:22:36.542: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 20 03:22:36.548: INFO: Pod pod-with-poststart-exec-hook still exists Jul 20 03:22:38.542: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 20 03:22:38.546: INFO: Pod pod-with-poststart-exec-hook still exists Jul 20 03:22:40.542: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 20 03:22:40.547: INFO: Pod pod-with-poststart-exec-hook still exists Jul 20 03:22:42.542: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 20 03:22:42.547: INFO: Pod pod-with-poststart-exec-hook still exists Jul 20 03:22:44.542: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 20 03:22:44.547: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 03:22:44.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9193" for this suite. 
• [SLOW TEST:22.204 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":294,"completed":285,"skipped":4641,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial]
should rollback without unnecessary restarts [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul 20 03:22:44.557: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should rollback without unnecessary restarts [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Jul 20 03:22:44.688: INFO: Create a RollingUpdate DaemonSet
Jul 20 03:22:44.692: INFO: Check that daemon pods launch on every node of the cluster
Jul 20 03:22:44.712: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 03:22:44.719: INFO: Number of nodes with available pods: 0
Jul 20 03:22:44.719: INFO: Node latest-worker is running more than one daemon pod
Jul 20 03:22:45.724: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 03:22:45.728: INFO: Number of nodes with available pods: 0
Jul 20 03:22:45.728: INFO: Node latest-worker is running more than one daemon pod
Jul 20 03:22:46.724: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 03:22:46.862: INFO: Number of nodes with available pods: 0
Jul 20 03:22:46.862: INFO: Node latest-worker is running more than one daemon pod
Jul 20 03:22:47.784: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 03:22:47.798: INFO: Number of nodes with available pods: 0
Jul 20 03:22:47.798: INFO: Node latest-worker is running more than one daemon pod
Jul 20 03:22:48.724: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 03:22:48.728: INFO: Number of nodes with available pods: 2
Jul 20 03:22:48.728: INFO: Number of running nodes: 2, number of available pods: 2
Jul 20 03:22:48.728: INFO: Update the DaemonSet to trigger a rollout
Jul 20 03:22:48.736: INFO: Updating DaemonSet daemon-set
Jul 20 03:23:04.756: INFO: Roll back the DaemonSet before rollout is complete
Jul 20 03:23:04.763: INFO: Updating DaemonSet daemon-set
Jul 20 03:23:04.763: INFO: Make sure DaemonSet rollback is complete
Jul 20 03:23:04.774: INFO: Wrong image for pod: daemon-set-tcxpt. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jul 20 03:23:04.775: INFO: Pod daemon-set-tcxpt is not available
Jul 20 03:23:04.818: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 03:23:05.822: INFO: Wrong image for pod: daemon-set-tcxpt. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jul 20 03:23:05.822: INFO: Pod daemon-set-tcxpt is not available
Jul 20 03:23:05.827: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 03:23:06.823: INFO: Pod daemon-set-v8vwr is not available
Jul 20 03:23:06.874: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8685, will wait for the garbage collector to delete the pods
Jul 20 03:23:06.944: INFO: Deleting DaemonSet.extensions daemon-set took: 6.865361ms
Jul 20 03:23:07.445: INFO: Terminating DaemonSet.extensions daemon-set pods took: 500.395539ms
Jul 20 03:23:13.247: INFO: Number of nodes with available pods: 0
Jul 20 03:23:13.247: INFO: Number of running nodes: 0, number of available pods: 0
Jul 20 03:23:13.249: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8685/daemonsets","resourceVersion":"116018"},"items":null}
Jul 20 03:23:13.253: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8685/pods","resourceVersion":"116018"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jul 20 03:23:13.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-8685" for this suite.
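The sequence above is: update the DaemonSet's pod template to an image that cannot be pulled ("foo:non-existent"), then restore the original image before the rollout completes. With the RollingUpdate strategy, only the pod that already picked up the bad image (daemon-set-tcxpt) gets replaced; pods still running the good image are left alone, which is the "without unnecessary restarts" property. A minimal client-go sketch of that update-then-rollback flow (namespace and names are assumptions, and rolling back by re-applying the old image is a simplification of the e2e test's own helpers):

package example

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// rollOutAndBack triggers a DaemonSet rollout with an unpullable image,
// then restores the previous image before the rollout finishes.
func rollOutAndBack(ctx context.Context, cs kubernetes.Interface, ns string) error {
	ds, err := cs.AppsV1().DaemonSets(ns).Get(ctx, "daemon-set", metav1.GetOptions{})
	if err != nil {
		return err
	}
	old := ds.Spec.Template.Spec.Containers[0].Image

	// Trigger a rollout that can never become ready.
	ds.Spec.Template.Spec.Containers[0].Image = "foo:non-existent"
	if ds, err = cs.AppsV1().DaemonSets(ns).Update(ctx, ds, metav1.UpdateOptions{}); err != nil {
		return err
	}

	// Roll back: only pods already on the bad image are replaced.
	ds.Spec.Template.Spec.Containers[0].Image = old
	_, err = cs.AppsV1().DaemonSets(ns).Update(ctx, ds, metav1.UpdateOptions{})
	return err
}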
• [SLOW TEST:28.712 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should rollback without unnecessary restarts [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":294,"completed":286,"skipped":4678,"failed":0}
SSS
------------------------------
[sig-storage] Projected secret
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul 20 03:23:13.269: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-map-779c4e37-72c5-47af-b186-ac7bd53e6cdf
STEP: Creating a pod to test consume secrets
Jul 20 03:23:13.370: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f94e92be-4c2f-4d0a-89d8-36e1cb71c0c5" in namespace "projected-696" to be "Succeeded or Failed"
Jul 20 03:23:13.389: INFO: Pod "pod-projected-secrets-f94e92be-4c2f-4d0a-89d8-36e1cb71c0c5": Phase="Pending", Reason="", readiness=false. Elapsed: 19.110334ms
Jul 20 03:23:15.393: INFO: Pod "pod-projected-secrets-f94e92be-4c2f-4d0a-89d8-36e1cb71c0c5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02335963s
Jul 20 03:23:17.397: INFO: Pod "pod-projected-secrets-f94e92be-4c2f-4d0a-89d8-36e1cb71c0c5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027146111s
STEP: Saw pod success
Jul 20 03:23:17.397: INFO: Pod "pod-projected-secrets-f94e92be-4c2f-4d0a-89d8-36e1cb71c0c5" satisfied condition "Succeeded or Failed"
Jul 20 03:23:17.400: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-f94e92be-4c2f-4d0a-89d8-36e1cb71c0c5 container projected-secret-volume-test:
STEP: delete the pod
Jul 20 03:23:17.437: INFO: Waiting for pod pod-projected-secrets-f94e92be-4c2f-4d0a-89d8-36e1cb71c0c5 to disappear
Jul 20 03:23:17.445: INFO: Pod pod-projected-secrets-f94e92be-4c2f-4d0a-89d8-36e1cb71c0c5 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jul 20 03:23:17.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-696" for this suite.
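Here the projected volume remaps a secret key to a different file path inside the mount, and the test pod simply reads the file back to verify content. A minimal sketch of the projection (secret name, key, and path are assumptions):

package example

import corev1 "k8s.io/api/core/v1"

// projectedSecretVolume maps the secret key "data-1" so it appears in
// the container at <mountPath>/new-path-data-1 instead of at the key name.
func projectedSecretVolume() corev1.Volume {
	return corev1.Volume{
		Name: "projected-secret-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test-map"},
						Items:                []corev1.KeyToPath{{Key: "data-1", Path: "new-path-data-1"}},
					},
				}},
			},
		},
	}
}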
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":294,"completed":287,"skipped":4681,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 03:23:17.453: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-projected-all-test-volume-8cfc284c-c4e3-49d2-968c-66b8f12f732c STEP: Creating secret with name secret-projected-all-test-volume-1d5bc98f-5d29-41e7-9079-4466a79411d3 STEP: Creating a pod to test Check all projections for projected volume plugin Jul 20 03:23:17.569: INFO: Waiting up to 5m0s for pod "projected-volume-5ac7b69b-8d96-485f-a714-f2663723762a" in namespace "projected-9780" to be "Succeeded or Failed" Jul 20 03:23:17.636: INFO: Pod "projected-volume-5ac7b69b-8d96-485f-a714-f2663723762a": Phase="Pending", Reason="", readiness=false. Elapsed: 66.565069ms Jul 20 03:23:19.946: INFO: Pod "projected-volume-5ac7b69b-8d96-485f-a714-f2663723762a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.376988552s Jul 20 03:23:21.951: INFO: Pod "projected-volume-5ac7b69b-8d96-485f-a714-f2663723762a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.381503355s STEP: Saw pod success Jul 20 03:23:21.951: INFO: Pod "projected-volume-5ac7b69b-8d96-485f-a714-f2663723762a" satisfied condition "Succeeded or Failed" Jul 20 03:23:21.954: INFO: Trying to get logs from node latest-worker2 pod projected-volume-5ac7b69b-8d96-485f-a714-f2663723762a container projected-all-volume-test: STEP: delete the pod Jul 20 03:23:22.169: INFO: Waiting for pod projected-volume-5ac7b69b-8d96-485f-a714-f2663723762a to disappear Jul 20 03:23:22.176: INFO: Pod projected-volume-5ac7b69b-8d96-485f-a714-f2663723762a no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 03:23:22.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9780" for this suite. 
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":294,"completed":288,"skipped":4689,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 03:23:22.185: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jul 20 03:23:22.300: INFO: Waiting up to 5m0s for pod "downwardapi-volume-44fa6d0b-2e89-417f-949d-e771110411d3" in namespace "downward-api-637" to be "Succeeded or Failed" Jul 20 03:23:22.308: INFO: Pod "downwardapi-volume-44fa6d0b-2e89-417f-949d-e771110411d3": Phase="Pending", Reason="", readiness=false. Elapsed: 7.230479ms Jul 20 03:23:24.311: INFO: Pod "downwardapi-volume-44fa6d0b-2e89-417f-949d-e771110411d3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011080287s Jul 20 03:23:26.316: INFO: Pod "downwardapi-volume-44fa6d0b-2e89-417f-949d-e771110411d3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015412652s STEP: Saw pod success Jul 20 03:23:26.316: INFO: Pod "downwardapi-volume-44fa6d0b-2e89-417f-949d-e771110411d3" satisfied condition "Succeeded or Failed" Jul 20 03:23:26.319: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-44fa6d0b-2e89-417f-949d-e771110411d3 container client-container: STEP: delete the pod Jul 20 03:23:26.366: INFO: Waiting for pod downwardapi-volume-44fa6d0b-2e89-417f-949d-e771110411d3 to disappear Jul 20 03:23:26.372: INFO: Pod downwardapi-volume-44fa6d0b-2e89-417f-949d-e771110411d3 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 03:23:26.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-637" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":294,"completed":289,"skipped":4736,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 03:23:26.380: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-943ecc22-0686-484d-a6d7-afcdfa775311 STEP: Creating a pod to test consume configMaps Jul 20 03:23:26.447: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5db94ace-4752-4824-9488-a4ee1147a1c5" in namespace "projected-598" to be "Succeeded or Failed" Jul 20 03:23:26.464: INFO: Pod "pod-projected-configmaps-5db94ace-4752-4824-9488-a4ee1147a1c5": Phase="Pending", Reason="", readiness=false. Elapsed: 16.947451ms Jul 20 03:23:28.468: INFO: Pod "pod-projected-configmaps-5db94ace-4752-4824-9488-a4ee1147a1c5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021838998s Jul 20 03:23:30.473: INFO: Pod "pod-projected-configmaps-5db94ace-4752-4824-9488-a4ee1147a1c5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026357179s STEP: Saw pod success Jul 20 03:23:30.473: INFO: Pod "pod-projected-configmaps-5db94ace-4752-4824-9488-a4ee1147a1c5" satisfied condition "Succeeded or Failed" Jul 20 03:23:30.476: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-5db94ace-4752-4824-9488-a4ee1147a1c5 container projected-configmap-volume-test: STEP: delete the pod Jul 20 03:23:30.508: INFO: Waiting for pod pod-projected-configmaps-5db94ace-4752-4824-9488-a4ee1147a1c5 to disappear Jul 20 03:23:30.538: INFO: Pod pod-projected-configmaps-5db94ace-4752-4824-9488-a4ee1147a1c5 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 03:23:30.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-598" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":294,"completed":290,"skipped":4796,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 03:23:30.547: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-7874 STEP: creating a selector STEP: Creating the service pods in kubernetes Jul 20 03:23:30.606: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Jul 20 03:23:30.709: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jul 20 03:23:32.910: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jul 20 03:23:34.714: INFO: The status of Pod netserver-0 is Running (Ready = false) Jul 20 03:23:36.724: INFO: The status of Pod netserver-0 is Running (Ready = false) Jul 20 03:23:38.714: INFO: The status of Pod netserver-0 is Running (Ready = false) Jul 20 03:23:40.713: INFO: The status of Pod netserver-0 is Running (Ready = false) Jul 20 03:23:42.742: INFO: The status of Pod netserver-0 is Running (Ready = false) Jul 20 03:23:44.713: INFO: The status of Pod netserver-0 is Running (Ready = false) Jul 20 03:23:46.735: INFO: The status of Pod netserver-0 is Running (Ready = true) Jul 20 03:23:46.742: INFO: The status of Pod netserver-1 is Running (Ready = false) Jul 20 03:23:48.745: INFO: The status of Pod netserver-1 is Running (Ready = false) Jul 20 03:23:50.760: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Jul 20 03:23:54.784: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.168:8080/dial?request=hostname&protocol=udp&host=10.244.1.67&port=8081&tries=1'] Namespace:pod-network-test-7874 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 20 03:23:54.784: INFO: >>> kubeConfig: /root/.kube/config I0720 03:23:54.815650 8 log.go:181] (0xc0024b8580) (0xc002f610e0) Create stream I0720 03:23:54.815690 8 log.go:181] (0xc0024b8580) (0xc002f610e0) Stream added, broadcasting: 1 I0720 03:23:54.817527 8 log.go:181] (0xc0024b8580) Reply frame received for 1 I0720 03:23:54.817564 8 log.go:181] (0xc0024b8580) (0xc0011c9ae0) Create stream I0720 03:23:54.817572 8 log.go:181] (0xc0024b8580) (0xc0011c9ae0) Stream added, broadcasting: 3 I0720 03:23:54.818631 8 log.go:181] (0xc0024b8580) Reply frame received for 3 I0720 03:23:54.818665 8 log.go:181] (0xc0024b8580) (0xc0032c2b40) Create stream I0720 03:23:54.818676 8 log.go:181] (0xc0024b8580) (0xc0032c2b40) Stream added, broadcasting: 5 I0720 03:23:54.819475 8 log.go:181] (0xc0024b8580) Reply frame received for 5 I0720 
03:23:54.929544 8 log.go:181] (0xc0024b8580) Data frame received for 3 I0720 03:23:54.929571 8 log.go:181] (0xc0011c9ae0) (3) Data frame handling I0720 03:23:54.929586 8 log.go:181] (0xc0011c9ae0) (3) Data frame sent I0720 03:23:54.930188 8 log.go:181] (0xc0024b8580) Data frame received for 5 I0720 03:23:54.930222 8 log.go:181] (0xc0032c2b40) (5) Data frame handling I0720 03:23:54.930264 8 log.go:181] (0xc0024b8580) Data frame received for 3 I0720 03:23:54.930287 8 log.go:181] (0xc0011c9ae0) (3) Data frame handling I0720 03:23:54.932567 8 log.go:181] (0xc0024b8580) Data frame received for 1 I0720 03:23:54.932584 8 log.go:181] (0xc002f610e0) (1) Data frame handling I0720 03:23:54.932595 8 log.go:181] (0xc002f610e0) (1) Data frame sent I0720 03:23:54.932607 8 log.go:181] (0xc0024b8580) (0xc002f610e0) Stream removed, broadcasting: 1 I0720 03:23:54.932670 8 log.go:181] (0xc0024b8580) Go away received I0720 03:23:54.932851 8 log.go:181] (0xc0024b8580) (0xc002f610e0) Stream removed, broadcasting: 1 I0720 03:23:54.932870 8 log.go:181] (0xc0024b8580) (0xc0011c9ae0) Stream removed, broadcasting: 3 I0720 03:23:54.932902 8 log.go:181] (0xc0024b8580) (0xc0032c2b40) Stream removed, broadcasting: 5 Jul 20 03:23:54.932: INFO: Waiting for responses: map[] Jul 20 03:23:54.936: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.168:8080/dial?request=hostname&protocol=udp&host=10.244.2.167&port=8081&tries=1'] Namespace:pod-network-test-7874 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 20 03:23:54.936: INFO: >>> kubeConfig: /root/.kube/config I0720 03:23:54.968934 8 log.go:181] (0xc0027ca210) (0xc00103a6e0) Create stream I0720 03:23:54.968964 8 log.go:181] (0xc0027ca210) (0xc00103a6e0) Stream added, broadcasting: 1 I0720 03:23:54.975400 8 log.go:181] (0xc0027ca210) Reply frame received for 1 I0720 03:23:54.975470 8 log.go:181] (0xc0027ca210) (0xc00103a8c0) Create stream I0720 03:23:54.975493 8 log.go:181] (0xc0027ca210) (0xc00103a8c0) Stream added, broadcasting: 3 I0720 03:23:54.976421 8 log.go:181] (0xc0027ca210) Reply frame received for 3 I0720 03:23:54.976465 8 log.go:181] (0xc0027ca210) (0xc002f61360) Create stream I0720 03:23:54.976481 8 log.go:181] (0xc0027ca210) (0xc002f61360) Stream added, broadcasting: 5 I0720 03:23:54.977291 8 log.go:181] (0xc0027ca210) Reply frame received for 5 I0720 03:23:55.032102 8 log.go:181] (0xc0027ca210) Data frame received for 3 I0720 03:23:55.032133 8 log.go:181] (0xc00103a8c0) (3) Data frame handling I0720 03:23:55.032150 8 log.go:181] (0xc00103a8c0) (3) Data frame sent I0720 03:23:55.032500 8 log.go:181] (0xc0027ca210) Data frame received for 5 I0720 03:23:55.032540 8 log.go:181] (0xc002f61360) (5) Data frame handling I0720 03:23:55.032837 8 log.go:181] (0xc0027ca210) Data frame received for 3 I0720 03:23:55.032862 8 log.go:181] (0xc00103a8c0) (3) Data frame handling I0720 03:23:55.034302 8 log.go:181] (0xc0027ca210) Data frame received for 1 I0720 03:23:55.034329 8 log.go:181] (0xc00103a6e0) (1) Data frame handling I0720 03:23:55.034354 8 log.go:181] (0xc00103a6e0) (1) Data frame sent I0720 03:23:55.034477 8 log.go:181] (0xc0027ca210) (0xc00103a6e0) Stream removed, broadcasting: 1 I0720 03:23:55.034513 8 log.go:181] (0xc0027ca210) Go away received I0720 03:23:55.034672 8 log.go:181] (0xc0027ca210) (0xc00103a6e0) Stream removed, broadcasting: 1 I0720 03:23:55.034704 8 log.go:181] (0xc0027ca210) (0xc00103a8c0) Stream removed, broadcasting: 3 I0720 03:23:55.034719 8 
log.go:181] (0xc0027ca210) (0xc002f61360) Stream removed, broadcasting: 5 Jul 20 03:23:55.034: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 03:23:55.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-7874" for this suite. • [SLOW TEST:24.497 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":294,"completed":291,"skipped":4833,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 03:23:55.044: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir volume type on node default medium Jul 20 03:23:55.133: INFO: Waiting up to 5m0s for pod "pod-5a647b77-77a0-4816-895b-c6dbe316a5c5" in namespace "emptydir-3813" to be "Succeeded or Failed" Jul 20 03:23:55.145: INFO: Pod "pod-5a647b77-77a0-4816-895b-c6dbe316a5c5": Phase="Pending", Reason="", readiness=false. Elapsed: 11.469907ms Jul 20 03:23:57.148: INFO: Pod "pod-5a647b77-77a0-4816-895b-c6dbe316a5c5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015297774s Jul 20 03:23:59.152: INFO: Pod "pod-5a647b77-77a0-4816-895b-c6dbe316a5c5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018870884s STEP: Saw pod success Jul 20 03:23:59.152: INFO: Pod "pod-5a647b77-77a0-4816-895b-c6dbe316a5c5" satisfied condition "Succeeded or Failed" Jul 20 03:23:59.155: INFO: Trying to get logs from node latest-worker2 pod pod-5a647b77-77a0-4816-895b-c6dbe316a5c5 container test-container: STEP: delete the pod Jul 20 03:23:59.245: INFO: Waiting for pod pod-5a647b77-77a0-4816-895b-c6dbe316a5c5 to disappear Jul 20 03:23:59.299: INFO: Pod pod-5a647b77-77a0-4816-895b-c6dbe316a5c5 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 03:23:59.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3813" for this suite. 
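The emptyDir test above creates a volume with no medium set, i.e. backed by node-local storage rather than tmpfs, and has its container print the mount's permission bits so the expected default mode can be asserted. A minimal sketch of that volume (the volume name is an assumption):

package example

import corev1 "k8s.io/api/core/v1"

// emptyDirDefaultMedium builds an emptyDir on the default medium.
// StorageMediumDefault is the empty string, so this is equivalent to
// leaving Medium unset; it is spelled out here for clarity.
func emptyDirDefaultMedium() corev1.Volume {
	return corev1.Volume{
		Name: "test-volume",
		VolumeSource: corev1.VolumeSource{
			EmptyDir: &corev1.EmptyDirVolumeSource{
				Medium: corev1.StorageMediumDefault,
			},
		},
	}
}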
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":292,"skipped":4873,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 20 03:23:59.308: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating secret secrets-4060/secret-test-1842f4ef-a2c3-4b43-a42f-fea363dcc028 STEP: Creating a pod to test consume secrets Jul 20 03:23:59.421: INFO: Waiting up to 5m0s for pod "pod-configmaps-a047934b-d4a4-470b-b763-55f7141f9e6a" in namespace "secrets-4060" to be "Succeeded or Failed" Jul 20 03:23:59.440: INFO: Pod "pod-configmaps-a047934b-d4a4-470b-b763-55f7141f9e6a": Phase="Pending", Reason="", readiness=false. Elapsed: 19.632953ms Jul 20 03:24:01.444: INFO: Pod "pod-configmaps-a047934b-d4a4-470b-b763-55f7141f9e6a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023772911s Jul 20 03:24:03.473: INFO: Pod "pod-configmaps-a047934b-d4a4-470b-b763-55f7141f9e6a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052200072s Jul 20 03:24:05.476: INFO: Pod "pod-configmaps-a047934b-d4a4-470b-b763-55f7141f9e6a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.055125193s STEP: Saw pod success Jul 20 03:24:05.476: INFO: Pod "pod-configmaps-a047934b-d4a4-470b-b763-55f7141f9e6a" satisfied condition "Succeeded or Failed" Jul 20 03:24:05.478: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-a047934b-d4a4-470b-b763-55f7141f9e6a container env-test: STEP: delete the pod Jul 20 03:24:05.521: INFO: Waiting for pod pod-configmaps-a047934b-d4a4-470b-b763-55f7141f9e6a to disappear Jul 20 03:24:05.598: INFO: Pod pod-configmaps-a047934b-d4a4-470b-b763-55f7141f9e6a no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 20 03:24:05.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4060" for this suite. 
• [SLOW TEST:6.299 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:35
should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":294,"completed":293,"skipped":4899,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets
should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul 20 03:24:05.607: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-map-64bc2466-917e-4048-882f-3be436aebacd
STEP: Creating a pod to test consume secrets
Jul 20 03:24:05.675: INFO: Waiting up to 5m0s for pod "pod-secrets-97e07805-5b26-44fa-828e-e33181df9e1b" in namespace "secrets-2605" to be "Succeeded or Failed"
Jul 20 03:24:05.679: INFO: Pod "pod-secrets-97e07805-5b26-44fa-828e-e33181df9e1b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.443195ms
Jul 20 03:24:07.683: INFO: Pod "pod-secrets-97e07805-5b26-44fa-828e-e33181df9e1b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007491704s
Jul 20 03:24:09.687: INFO: Pod "pod-secrets-97e07805-5b26-44fa-828e-e33181df9e1b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011538124s
STEP: Saw pod success
Jul 20 03:24:09.687: INFO: Pod "pod-secrets-97e07805-5b26-44fa-828e-e33181df9e1b" satisfied condition "Succeeded or Failed"
Jul 20 03:24:09.689: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-97e07805-5b26-44fa-828e-e33181df9e1b container secret-volume-test:
STEP: delete the pod
Jul 20 03:24:09.726: INFO: Waiting for pod pod-secrets-97e07805-5b26-44fa-828e-e33181df9e1b to disappear
Jul 20 03:24:09.766: INFO: Pod pod-secrets-97e07805-5b26-44fa-828e-e33181df9e1b no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jul 20 03:24:09.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2605" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":294,"skipped":4913,"failed":0}
SSSSSSS
Jul 20 03:24:09.774: INFO: Running AfterSuite actions on all nodes
Jul 20 03:24:09.774: INFO: Running AfterSuite actions on node 1
Jul 20 03:24:09.774: INFO: Skipping dumping logs from cluster
JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml
{"msg":"Test Suite completed","total":294,"completed":294,"skipped":4920,"failed":0}
Ran 294 of 5214 Specs in 6065.138 seconds
SUCCESS! -- 294 Passed | 0 Failed | 0 Pending | 4920 Skipped
PASS