I0817 10:53:36.705920 10 test_context.go:429] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0817 10:53:36.711714 10 e2e.go:129] Starting e2e run "6247fac7-7b4a-49ee-8e2e-c02fa38d14a8" on Ginkgo node 1
{"msg":"Test Suite starting","total":303,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1597661602 - Will randomize all specs
Will run 303 of 5237 specs

Aug 17 10:53:37.270: INFO: >>> kubeConfig: /root/.kube/config
Aug 17 10:53:37.324: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Aug 17 10:53:37.517: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Aug 17 10:53:37.751: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Aug 17 10:53:37.751: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Aug 17 10:53:37.752: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Aug 17 10:53:37.806: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Aug 17 10:53:37.806: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Aug 17 10:53:37.806: INFO: e2e test version: v1.19.0-rc.4
Aug 17 10:53:37.812: INFO: kube-apiserver version: v1.19.0-rc.1
Aug 17 10:53:37.813: INFO: >>> kubeConfig: /root/.kube/config
Aug 17 10:53:37.835: INFO: Cluster IP family: ipv4
SS
------------------------------
[sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] ConfigMap
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 17 10:53:37.841: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
Aug 17 10:53:37.914: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap that has name configmap-test-emptyKey-f8c9825c-686e-4fb3-9334-f17dca134046
[AfterEach] [sig-node] ConfigMap
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 17 10:53:37.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9631" for this suite.
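The empty-key failure above is plain apiserver-side validation, so it can be reproduced without the e2e framework. A minimal client-go sketch, assuming a reachable cluster via the usual kubeconfig (the ConfigMap and namespace names here are illustrative):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load ~/.kube/config, as the suite does via --kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// A ConfigMap whose data map contains an empty key is rejected by
	// apiserver validation; no controller is involved.
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-emptykey"},
		Data:       map[string]string{"": "value"},
	}
	_, err = client.CoreV1().ConfigMaps("default").Create(context.TODO(), cm, metav1.CreateOptions{})
	fmt.Printf("create failed as expected: %v (IsInvalid=%v)\n", err, apierrors.IsInvalid(err))
}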
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":303,"completed":1,"skipped":2,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 10:53:37.971: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating replication controller my-hostname-basic-7d0f9fbd-7d5a-49cb-a0cc-a54d602111a6 Aug 17 10:53:38.160: INFO: Pod name my-hostname-basic-7d0f9fbd-7d5a-49cb-a0cc-a54d602111a6: Found 1 pods out of 1 Aug 17 10:53:38.160: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-7d0f9fbd-7d5a-49cb-a0cc-a54d602111a6" are running Aug 17 10:53:42.304: INFO: Pod "my-hostname-basic-7d0f9fbd-7d5a-49cb-a0cc-a54d602111a6-6dbtt" is running (conditions: [{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-17 10:53:38 +0000 UTC Reason: Message:}]) Aug 17 10:53:42.313: INFO: Trying to dial the pod Aug 17 10:53:47.334: INFO: Controller my-hostname-basic-7d0f9fbd-7d5a-49cb-a0cc-a54d602111a6: Got expected result from replica 1 [my-hostname-basic-7d0f9fbd-7d5a-49cb-a0cc-a54d602111a6-6dbtt]: "my-hostname-basic-7d0f9fbd-7d5a-49cb-a0cc-a54d602111a6-6dbtt", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 10:53:47.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-3004" for this suite. 
• [SLOW TEST:9.377 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":303,"completed":2,"skipped":17,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 17 10:53:47.353: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
Aug 17 10:53:47.495: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 17 10:53:47.586: INFO: Waiting for terminating namespaces to be deleted...
Aug 17 10:53:47.594: INFO: Logging pods the apiserver thinks are on node latest-worker before test
Aug 17 10:53:47.611: INFO: kindnet-gmpqb from kube-system started at 2020-08-15 09:42:30 +0000 UTC (1 container status recorded)
Aug 17 10:53:47.611: INFO: Container kindnet-cni ready: true, restart count 0
Aug 17 10:53:47.611: INFO: kube-proxy-82wrf from kube-system started at 2020-08-15 09:42:30 +0000 UTC (1 container status recorded)
Aug 17 10:53:47.611: INFO: Container kube-proxy ready: true, restart count 0
Aug 17 10:53:47.611: INFO: Logging pods the apiserver thinks are on node latest-worker2 before test
Aug 17 10:53:47.620: INFO: kindnet-grzzh from kube-system started at 2020-08-15 09:42:30 +0000 UTC (1 container status recorded)
Aug 17 10:53:47.620: INFO: Container kindnet-cni ready: true, restart count 0
Aug 17 10:53:47.620: INFO: kube-proxy-fjk8r from kube-system started at 2020-08-15 09:42:29 +0000 UTC (1 container status recorded)
Aug 17 10:53:47.621: INFO: Container kube-proxy ready: true, restart count 0
Aug 17 10:53:47.621: INFO: my-hostname-basic-7d0f9fbd-7d5a-49cb-a0cc-a54d602111a6-6dbtt from replication-controller-3004 started at 2020-08-17 10:53:38 +0000 UTC (1 container status recorded)
Aug 17 10:53:47.621: INFO: Container my-hostname-basic-7d0f9fbd-7d5a-49cb-a0cc-a54d602111a6 ready: true, restart count 0
[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-d4319b0b-b3dc-4f9d-a16b-f1cf6a73ac77 95
STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled
STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled
STEP: removing the label kubernetes.io/e2e-d4319b0b-b3dc-4f9d-a16b-f1cf6a73ac77 off the node latest-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-d4319b0b-b3dc-4f9d-a16b-f1cf6a73ac77
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 17 10:59:02.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-4239" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
• [SLOW TEST:315.122 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":303,"completed":3,"skipped":28,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 17 10:59:02.478: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 17 10:59:06.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-3484" for this suite.
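For the hostPort conflict above: the scheduler treats hostIP 0.0.0.0 (or the empty string) as claiming the port on every node address, so a second pod asking for the same hostPort and protocol on 127.0.0.1 cannot land on the same node and stays Pending. A sketch of the two pod shapes, with the node label taken from the log; the image and container names are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// hostPortPod builds a pod pinned to the labeled node that asks for
// hostPort 54322/TCP on the given host IP.
func hostPortPod(name, hostIP string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{"kubernetes.io/e2e-d4319b0b-b3dc-4f9d-a16b-f1cf6a73ac77": "95"},
			Containers: []corev1.Container{{
				Name:  "agnhost",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.20",
				Args:  []string{"pause"},
				Ports: []corev1.ContainerPort{{
					ContainerPort: 54322,
					HostPort:      54322,
					HostIP:        hostIP, // "" is equivalent to 0.0.0.0
					Protocol:      corev1.ProtocolTCP,
				}},
			}},
		},
	}
}

func main() {
	pod4 := hostPortPod("pod4", "0.0.0.0") // schedules
	pod5 := hostPortPod("pod5", "127.0.0.1") // conflicts, stays Pending
	fmt.Println(pod4.Name, pod5.Name) // create both via client-go to observe the conflict
}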
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":303,"completed":4,"skipped":73,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 10:59:06.765: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Aug 17 10:59:14.300: INFO: 5 pods remaining Aug 17 10:59:14.300: INFO: 0 pods has nil DeletionTimestamp Aug 17 10:59:14.301: INFO: Aug 17 10:59:16.151: INFO: 0 pods remaining Aug 17 10:59:16.152: INFO: 0 pods has nil DeletionTimestamp Aug 17 10:59:16.152: INFO: STEP: Gathering metrics W0817 10:59:18.532478 10 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Aug 17 11:00:20.948: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 11:00:20.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5835" for this suite. 
• [SLOW TEST:74.195 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":303,"completed":5,"skipped":142,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 17 11:00:20.961: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Aug 17 11:00:21.099: INFO: Waiting up to 5m0s for pod "downwardapi-volume-894f35cc-f88d-440a-8211-3b1558ce538b" in namespace "downward-api-4388" to be "Succeeded or Failed"
Aug 17 11:00:21.118: INFO: Pod "downwardapi-volume-894f35cc-f88d-440a-8211-3b1558ce538b": Phase="Pending", Reason="", readiness=false. Elapsed: 19.364779ms
Aug 17 11:00:23.126: INFO: Pod "downwardapi-volume-894f35cc-f88d-440a-8211-3b1558ce538b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026754832s
Aug 17 11:00:25.133: INFO: Pod "downwardapi-volume-894f35cc-f88d-440a-8211-3b1558ce538b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033560156s
STEP: Saw pod success
Aug 17 11:00:25.133: INFO: Pod "downwardapi-volume-894f35cc-f88d-440a-8211-3b1558ce538b" satisfied condition "Succeeded or Failed"
Aug 17 11:00:25.138: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-894f35cc-f88d-440a-8211-3b1558ce538b container client-container:
STEP: delete the pod
Aug 17 11:00:25.196: INFO: Waiting for pod downwardapi-volume-894f35cc-f88d-440a-8211-3b1558ce538b to disappear
Aug 17 11:00:25.201: INFO: Pod downwardapi-volume-894f35cc-f88d-440a-8211-3b1558ce538b no longer exists
[AfterEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 17 11:00:25.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4388" for this suite.
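The downward API test above relies on a defaulting rule: when a container declares no CPU limit, a downwardAPI volume file for limits.cpu resolves to the node's allocatable CPU. A sketch of the pod shape, constructed offline; the agnhost mounttest args and file path are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// No resources.limits on the container, so the projected cpu_limit
	// file falls back to node allocatable CPU -- the asserted behavior.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "client-container",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.20",
				Args:  []string{"mounttest", "--file_content=/etc/podinfo/cpu_limit"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "cpu_limit",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.cpu",
							},
						}},
					},
				},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	fmt.Println(pod.Name)
}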
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":303,"completed":6,"skipped":164,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 11:00:25.215: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 11:00:59.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-454" for this suite. 
• [SLOW TEST:34.619 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  blackbox test
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41
    when starting a container that exits
    /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:42
      should run with the expected status [NodeConformance] [Conformance]
      /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":303,"completed":7,"skipped":183,"failed":0}
SSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 17 11:00:59.835: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-29fa23f1-dfec-47db-8695-a60dd194a207
STEP: Creating a pod to test consume secrets
Aug 17 11:01:00.198: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-867f89cc-311c-49c0-8ca6-a28b7b122844" in namespace "projected-7165" to be "Succeeded or Failed"
Aug 17 11:01:00.268: INFO: Pod "pod-projected-secrets-867f89cc-311c-49c0-8ca6-a28b7b122844": Phase="Pending", Reason="", readiness=false. Elapsed: 69.922528ms
Aug 17 11:01:02.291: INFO: Pod "pod-projected-secrets-867f89cc-311c-49c0-8ca6-a28b7b122844": Phase="Pending", Reason="", readiness=false. Elapsed: 2.093570631s
Aug 17 11:01:04.297: INFO: Pod "pod-projected-secrets-867f89cc-311c-49c0-8ca6-a28b7b122844": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.098961278s
STEP: Saw pod success
Aug 17 11:01:04.297: INFO: Pod "pod-projected-secrets-867f89cc-311c-49c0-8ca6-a28b7b122844" satisfied condition "Succeeded or Failed"
Aug 17 11:01:04.301: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-867f89cc-311c-49c0-8ca6-a28b7b122844 container projected-secret-volume-test:
STEP: delete the pod
Aug 17 11:01:04.357: INFO: Waiting for pod pod-projected-secrets-867f89cc-311c-49c0-8ca6-a28b7b122844 to disappear
Aug 17 11:01:04.380: INFO: Pod pod-projected-secrets-867f89cc-311c-49c0-8ca6-a28b7b122844 no longer exists
[AfterEach] [sig-storage] Projected secret
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 17 11:01:04.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7165" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":8,"skipped":188,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 17 11:01:04.393: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 17 11:01:06.612: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733258866, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733258866, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733258866, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733258866, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 17 11:01:08.619: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733258866, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733258866, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733258866, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733258866, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 17 11:01:11.654: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny pod and configmap creation [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod that should be denied by the webhook
STEP: create a pod that causes the webhook to hang
STEP: create a configmap that should be denied by the webhook
STEP: create a configmap that should be admitted by the webhook
STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: create a namespace that bypasses the webhook
STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 17 11:01:21.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4824" for this suite.
STEP: Destroying namespace "webhook-4824-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:17.551 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny pod and configmap creation [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":303,"completed":9,"skipped":206,"failed":0}
S
------------------------------
[k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 17 11:01:21.945: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod liveness-db74089e-6c6d-460e-b5d5-c9b7d61d8681 in namespace container-probe-4426
Aug 17 11:01:28.070: INFO: Started pod liveness-db74089e-6c6d-460e-b5d5-c9b7d61d8681 in namespace container-probe-4426
STEP: checking the pod's current state and verifying that restartCount is present
Aug 17 11:01:28.076: INFO: Initial restart count of pod liveness-db74089e-6c6d-460e-b5d5-c9b7d61d8681 is 0
Aug 17 11:01:48.145: INFO: Restart count of pod container-probe-4426/liveness-db74089e-6c6d-460e-b5d5-c9b7d61d8681 is now 1 (20.067557746s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 17 11:01:48.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4426" for this suite.
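The restart the log records (count 0 to 1 in about 20s) is the kubelet reacting to a failing HTTP liveness probe. A sketch of the pod shape, constructed offline, assuming the agnhost liveness server, which answers /healthz on :8080 and then deliberately starts failing:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-demo", Labels: map[string]string{"test": "liveness"}},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "agnhost",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.20",
				Args:  []string{"liveness"},
				LivenessProbe: &corev1.Probe{
					// In the v1.19-era API the probe action is the embedded
					// Handler field (renamed ProbeHandler in later releases).
					Handler: corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(8080)},
					},
					InitialDelaySeconds: 15,
					FailureThreshold:    1,
				},
			}},
		},
	}
	fmt.Println(pod.Name) // once /healthz fails, the kubelet restarts the container
}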
• [SLOW TEST:26.263 seconds]
[k8s.io] Probing container
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":303,"completed":10,"skipped":207,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 17 11:01:48.209: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256
[BeforeEach] Kubectl replace
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1581
[It] should update a single-container pod's image [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Aug 17 11:01:48.504: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-9499'
Aug 17 11:01:52.633: INFO: stderr: ""
Aug 17 11:01:52.633: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod is running
STEP: verifying the pod e2e-test-httpd-pod was created
Aug 17 11:01:57.687: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-9499 -o json'
Aug 17 11:01:59.097: INFO: stderr: ""
Aug 17 11:01:59.097: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-08-17T11:01:52Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {}\n }\n },\n \"manager\": \"kubectl-run\",\n \"operation\": \"Update\",\n \"time\": \"2020-08-17T11:01:52Z\"\n },\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:phase\": {},\n \"f:podIP\": {},\n \"f:podIPs\": {\n \".\": {},\n \"k:{\\\"ip\\\":\\\"10.244.1.161\\\"}\": {\n \".\": {},\n \"f:ip\": {}\n }\n },\n \"f:startTime\": {}\n }\n },\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"time\": \"2020-08-17T11:01:55Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-9499\",\n \"resourceVersion\": \"699240\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-9499/pods/e2e-test-httpd-pod\",\n \"uid\": \"1be8ed8d-8240-4e48-995a-09f7999a915d\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-84q7x\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"latest-worker2\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-84q7x\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-84q7x\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-17T11:01:52Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-17T11:01:55Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-17T11:01:55Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-17T11:01:52Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://51bc83e3b6fa8d1f1594b986e55aaf39dd53426ec4e5be35d7bbf8c9a98b65ff\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-08-17T11:01:55Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.18.0.14\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.161\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.1.161\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-08-17T11:01:52Z\"\n }\n}\n"
STEP: replace the image in the pod
Aug 17 11:01:59.101: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-9499'
Aug 17 11:02:01.945: INFO: stderr: ""
Aug 17 11:02:01.945: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n"
STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29
[AfterEach] Kubectl replace
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1586
Aug 17 11:02:02.511: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-9499'
Aug 17 11:02:07.852: INFO: stderr: ""
Aug 17 11:02:07.852: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 17 11:02:07.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9499" for this suite.
• [SLOW TEST:19.677 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl replace
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1577
    should update a single-container pod's image [Conformance]
    /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":303,"completed":11,"skipped":231,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 17 11:02:07.888: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with terminating scopes. [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a ResourceQuota with terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a long running pod
STEP: Ensuring resource quota with not terminating scope captures the pod usage
STEP: Ensuring resource quota with terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a terminating pod
STEP: Ensuring resource quota with terminating scope captures the pod usage
STEP: Ensuring resource quota with not terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 17 11:02:25.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8773" for this suite.
• [SLOW TEST:17.332 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":303,"completed":12,"skipped":242,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 17 11:02:25.223: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-bbe5a0c6-e23c-4d7d-a1bd-3f65ac31f538
STEP: Creating a pod to test consume configMaps
Aug 17 11:02:25.493: INFO: Waiting up to 5m0s for pod "pod-configmaps-29cedbfd-75b5-4708-9af1-504cccc6230c" in namespace "configmap-363" to be "Succeeded or Failed"
Aug 17 11:02:25.519: INFO: Pod "pod-configmaps-29cedbfd-75b5-4708-9af1-504cccc6230c": Phase="Pending", Reason="", readiness=false. Elapsed: 25.811968ms
Aug 17 11:02:27.527: INFO: Pod "pod-configmaps-29cedbfd-75b5-4708-9af1-504cccc6230c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033392028s
Aug 17 11:02:29.537: INFO: Pod "pod-configmaps-29cedbfd-75b5-4708-9af1-504cccc6230c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043286384s
Aug 17 11:02:31.629: INFO: Pod "pod-configmaps-29cedbfd-75b5-4708-9af1-504cccc6230c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.135472036s
STEP: Saw pod success
Aug 17 11:02:31.629: INFO: Pod "pod-configmaps-29cedbfd-75b5-4708-9af1-504cccc6230c" satisfied condition "Succeeded or Failed"
Aug 17 11:02:31.634: INFO: Trying to get logs from node latest-worker pod pod-configmaps-29cedbfd-75b5-4708-9af1-504cccc6230c container configmap-volume-test:
STEP: delete the pod
Aug 17 11:02:31.986: INFO: Waiting for pod pod-configmaps-29cedbfd-75b5-4708-9af1-504cccc6230c to disappear
Aug 17 11:02:32.017: INFO: Pod pod-configmaps-29cedbfd-75b5-4708-9af1-504cccc6230c no longer exists
[AfterEach] [sig-storage] ConfigMap
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 17 11:02:32.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-363" for this suite.
• [SLOW TEST:6.856 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":303,"completed":13,"skipped":278,"failed":0}
SSSSSSS
------------------------------
[sig-network] DNS should support configurable pod DNS nameservers [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 17 11:02:32.081: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support configurable pod DNS nameservers [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
Aug 17 11:02:32.169: INFO: Created pod &Pod{ObjectMeta:{dns-7037 dns-7037 /api/v1/namespaces/dns-7037/pods/dns-7037 4c6559e7-2d32-4b62-98fe-b2fdf023fb2c 699459 0 2020-08-17 11:02:32 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2020-08-17 11:02:32 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-cmdvb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-cmdvb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-cmdvb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNode
Name:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 17 11:02:32.389: INFO: The status of Pod dns-7037 is Pending, waiting for it to be Running (with Ready = true) Aug 17 11:02:34.557: INFO: The status of Pod dns-7037 is Pending, waiting for it to be Running (with Ready = true) Aug 17 11:02:36.396: INFO: The status of Pod dns-7037 is Pending, waiting for it to be Running (with Ready = true) Aug 17 11:02:38.396: INFO: The status of Pod dns-7037 is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... Aug 17 11:02:38.398: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-7037 PodName:dns-7037 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 17 11:02:38.398: INFO: >>> kubeConfig: /root/.kube/config I0817 11:02:38.496128 10 log.go:181] (0x4003974630) (0x4000358460) Create stream I0817 11:02:38.496920 10 log.go:181] (0x4003974630) (0x4000358460) Stream added, broadcasting: 1 I0817 11:02:38.520747 10 log.go:181] (0x4003974630) Reply frame received for 1 I0817 11:02:38.521572 10 log.go:181] (0x4003974630) (0x4000358780) Create stream I0817 11:02:38.521676 10 log.go:181] (0x4003974630) (0x4000358780) Stream added, broadcasting: 3 I0817 11:02:38.523360 10 log.go:181] (0x4003974630) Reply frame received for 3 I0817 11:02:38.523628 10 log.go:181] (0x4003974630) (0x4000358f00) Create stream I0817 11:02:38.523693 10 log.go:181] (0x4003974630) (0x4000358f00) Stream added, broadcasting: 5 I0817 11:02:38.524900 10 log.go:181] (0x4003974630) Reply frame received for 5 I0817 11:02:38.609711 10 log.go:181] (0x4003974630) Data frame received for 3 I0817 11:02:38.610284 10 log.go:181] (0x4000358780) (3) Data frame handling I0817 11:02:38.610746 10 log.go:181] (0x4000358780) (3) Data frame sent I0817 11:02:38.614753 10 log.go:181] (0x4003974630) Data frame received for 5 I0817 11:02:38.614846 10 log.go:181] (0x4000358f00) (5) Data frame handling I0817 11:02:38.615242 10 log.go:181] (0x4003974630) Data frame received for 3 I0817 11:02:38.615424 10 log.go:181] (0x4000358780) (3) Data frame handling I0817 11:02:38.616705 10 log.go:181] (0x4003974630) Data frame received for 1 I0817 11:02:38.616922 10 log.go:181] (0x4000358460) (1) Data frame handling I0817 11:02:38.617021 10 log.go:181] (0x4000358460) (1) Data frame sent I0817 11:02:38.617865 10 log.go:181] (0x4003974630) (0x4000358460) Stream removed, broadcasting: 1 I0817 11:02:38.620504 10 log.go:181] (0x4003974630) Go away received I0817 11:02:38.622445 10 log.go:181] (0x4003974630) (0x4000358460) Stream removed, broadcasting: 1 I0817 11:02:38.622775 10 log.go:181] (0x4003974630) (0x4000358780) Stream removed, broadcasting: 3 I0817 11:02:38.623151 10 log.go:181] (0x4003974630) (0x4000358f00) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... 
Aug 17 11:02:38.623: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-7037 PodName:dns-7037 ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 17 11:02:38.624: INFO: >>> kubeConfig: /root/.kube/config
I0817 11:02:38.678214 10 log.go:181] (0x4001bf20b0) (0x4000b26500) Create stream
I0817 11:02:38.678450 10 log.go:181] (0x4001bf20b0) (0x4000b26500) Stream added, broadcasting: 1
I0817 11:02:38.683433 10 log.go:181] (0x4001bf20b0) Reply frame received for 1
I0817 11:02:38.683563 10 log.go:181] (0x4001bf20b0) (0x4000b26640) Create stream
I0817 11:02:38.683619 10 log.go:181] (0x4001bf20b0) (0x4000b26640) Stream added, broadcasting: 3
I0817 11:02:38.685110 10 log.go:181] (0x4001bf20b0) Reply frame received for 3
I0817 11:02:38.685291 10 log.go:181] (0x4001bf20b0) (0x40024beb40) Create stream
I0817 11:02:38.685388 10 log.go:181] (0x4001bf20b0) (0x40024beb40) Stream added, broadcasting: 5
I0817 11:02:38.686816 10 log.go:181] (0x4001bf20b0) Reply frame received for 5
I0817 11:02:38.768945 10 log.go:181] (0x4001bf20b0) Data frame received for 3
I0817 11:02:38.769084 10 log.go:181] (0x4000b26640) (3) Data frame handling
I0817 11:02:38.769183 10 log.go:181] (0x4000b26640) (3) Data frame sent
I0817 11:02:38.770062 10 log.go:181] (0x4001bf20b0) Data frame received for 5
I0817 11:02:38.770174 10 log.go:181] (0x40024beb40) (5) Data frame handling
I0817 11:02:38.770337 10 log.go:181] (0x4001bf20b0) Data frame received for 3
I0817 11:02:38.770454 10 log.go:181] (0x4000b26640) (3) Data frame handling
I0817 11:02:38.771726 10 log.go:181] (0x4001bf20b0) Data frame received for 1
I0817 11:02:38.771873 10 log.go:181] (0x4000b26500) (1) Data frame handling
I0817 11:02:38.771953 10 log.go:181] (0x4000b26500) (1) Data frame sent
I0817 11:02:38.772039 10 log.go:181] (0x4001bf20b0) (0x4000b26500) Stream removed, broadcasting: 1
I0817 11:02:38.772121 10 log.go:181] (0x4001bf20b0) Go away received
I0817 11:02:38.772618 10 log.go:181] (0x4001bf20b0) (0x4000b26500) Stream removed, broadcasting: 1
I0817 11:02:38.772853 10 log.go:181] (0x4001bf20b0) (0x4000b26640) Stream removed, broadcasting: 3
I0817 11:02:38.772944 10 log.go:181] (0x4001bf20b0) (0x40024beb40) Stream removed, broadcasting: 5
Aug 17 11:02:38.773: INFO: Deleting pod dns-7037...
[AfterEach] [sig-network] DNS
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 17 11:02:38.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7037" for this suite.
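The relevant fragment of the pod dumped above: with dnsPolicy None the kubelet writes the container's resolv.conf solely from dnsConfig, which is what `/agnhost dns-server-list` and `dns-suffix` then verify from inside the pod. A sketch of that spec, constructed offline with the nameserver and search values from the log:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "dns-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "agnhost",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.20",
				Args:  []string{"pause"},
			}},
			// None: ignore cluster DNS defaults entirely; resolv.conf is
			// generated from DNSConfig alone.
			DNSPolicy: corev1.DNSNone,
			DNSConfig: &corev1.PodDNSConfig{
				Nameservers: []string{"1.1.1.1"},
				Searches:    []string{"resolv.conf.local"},
			},
		},
	}
	fmt.Println(pod.Name)
}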
• [SLOW TEST:7.124 seconds] [sig-network] DNS /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should support configurable pod DNS nameservers [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":303,"completed":14,"skipped":285,"failed":0} SSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 11:02:39.206: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Aug 17 11:02:39.810: INFO: Waiting up to 5m0s for pod "downward-api-9e4d8523-d63c-45bd-b140-4c65e9aab328" in namespace "downward-api-6539" to be "Succeeded or Failed" Aug 17 11:02:40.091: INFO: Pod "downward-api-9e4d8523-d63c-45bd-b140-4c65e9aab328": Phase="Pending", Reason="", readiness=false. Elapsed: 280.921737ms Aug 17 11:02:42.097: INFO: Pod "downward-api-9e4d8523-d63c-45bd-b140-4c65e9aab328": Phase="Pending", Reason="", readiness=false. Elapsed: 2.287504368s Aug 17 11:02:44.103: INFO: Pod "downward-api-9e4d8523-d63c-45bd-b140-4c65e9aab328": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.293607545s STEP: Saw pod success Aug 17 11:02:44.104: INFO: Pod "downward-api-9e4d8523-d63c-45bd-b140-4c65e9aab328" satisfied condition "Succeeded or Failed" Aug 17 11:02:44.108: INFO: Trying to get logs from node latest-worker pod downward-api-9e4d8523-d63c-45bd-b140-4c65e9aab328 container dapi-container: STEP: delete the pod Aug 17 11:02:44.140: INFO: Waiting for pod downward-api-9e4d8523-d63c-45bd-b140-4c65e9aab328 to disappear Aug 17 11:02:44.153: INFO: Pod downward-api-9e4d8523-d63c-45bd-b140-4c65e9aab328 no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 11:02:44.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6539" for this suite. 
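------------------------------
Context for the downward-api test just completed: the dapi-container reads its own limits.cpu/memory and requests.cpu/memory through downward-API env vars. A minimal Go sketch of that wiring follows; the env-var names are illustrative, and only the ResourceFieldRef mechanism is the point.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	env := []corev1.EnvVar{}
	for name, res := range map[string]string{
		"CPU_LIMIT":      "limits.cpu",
		"MEMORY_LIMIT":   "limits.memory",
		"CPU_REQUEST":    "requests.cpu",
		"MEMORY_REQUEST": "requests.memory",
	} {
		env = append(env, corev1.EnvVar{
			Name: name,
			ValueFrom: &corev1.EnvVarSource{
				// ResourceFieldRef resolves the named resource of the
				// target container and injects its value as a string.
				ResourceFieldRef: &corev1.ResourceFieldSelector{
					ContainerName: "dapi-container",
					Resource:      res,
				},
			},
		})
	}
	fmt.Println(len(env), "downward API env vars built")
}
------------------------------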
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":303,"completed":15,"skipped":288,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 11:02:44.180: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-79e907fc-68ac-494e-b1df-77817c63a3c0 STEP: Creating a pod to test consume configMaps Aug 17 11:02:44.240: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-84b53022-7fbe-4c60-be9d-b2200da4035a" in namespace "projected-2580" to be "Succeeded or Failed" Aug 17 11:02:44.276: INFO: Pod "pod-projected-configmaps-84b53022-7fbe-4c60-be9d-b2200da4035a": Phase="Pending", Reason="", readiness=false. Elapsed: 35.441899ms Aug 17 11:02:46.282: INFO: Pod "pod-projected-configmaps-84b53022-7fbe-4c60-be9d-b2200da4035a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042378363s Aug 17 11:02:48.289: INFO: Pod "pod-projected-configmaps-84b53022-7fbe-4c60-be9d-b2200da4035a": Phase="Running", Reason="", readiness=true. Elapsed: 4.04870054s Aug 17 11:02:50.338: INFO: Pod "pod-projected-configmaps-84b53022-7fbe-4c60-be9d-b2200da4035a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.097538223s STEP: Saw pod success Aug 17 11:02:50.338: INFO: Pod "pod-projected-configmaps-84b53022-7fbe-4c60-be9d-b2200da4035a" satisfied condition "Succeeded or Failed" Aug 17 11:02:50.347: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-84b53022-7fbe-4c60-be9d-b2200da4035a container projected-configmap-volume-test: STEP: delete the pod Aug 17 11:02:50.443: INFO: Waiting for pod pod-projected-configmaps-84b53022-7fbe-4c60-be9d-b2200da4035a to disappear Aug 17 11:02:50.448: INFO: Pod pod-projected-configmaps-84b53022-7fbe-4c60-be9d-b2200da4035a no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 11:02:50.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2580" for this suite. 
• [SLOW TEST:6.281 seconds] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":303,"completed":16,"skipped":300,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 11:02:50.464: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-7fb7fd18-90fe-48a1-befc-46de617bf521 STEP: Creating a pod to test consume secrets Aug 17 11:02:50.954: INFO: Waiting up to 5m0s for pod "pod-secrets-f26d4c36-5276-4b52-affb-75262877c312" in namespace "secrets-7618" to be "Succeeded or Failed" Aug 17 11:02:51.015: INFO: Pod "pod-secrets-f26d4c36-5276-4b52-affb-75262877c312": Phase="Pending", Reason="", readiness=false. Elapsed: 60.817778ms Aug 17 11:02:53.023: INFO: Pod "pod-secrets-f26d4c36-5276-4b52-affb-75262877c312": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068531549s Aug 17 11:02:55.030: INFO: Pod "pod-secrets-f26d4c36-5276-4b52-affb-75262877c312": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076163727s Aug 17 11:02:57.158: INFO: Pod "pod-secrets-f26d4c36-5276-4b52-affb-75262877c312": Phase="Pending", Reason="", readiness=false. Elapsed: 6.203544764s Aug 17 11:02:59.167: INFO: Pod "pod-secrets-f26d4c36-5276-4b52-affb-75262877c312": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.212814506s STEP: Saw pod success Aug 17 11:02:59.167: INFO: Pod "pod-secrets-f26d4c36-5276-4b52-affb-75262877c312" satisfied condition "Succeeded or Failed" Aug 17 11:02:59.172: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-f26d4c36-5276-4b52-affb-75262877c312 container secret-volume-test: STEP: delete the pod Aug 17 11:02:59.241: INFO: Waiting for pod pod-secrets-f26d4c36-5276-4b52-affb-75262877c312 to disappear Aug 17 11:02:59.246: INFO: Pod pod-secrets-f26d4c36-5276-4b52-affb-75262877c312 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 11:02:59.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7618" for this suite. • [SLOW TEST:8.797 seconds] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":17,"skipped":327,"failed":0} [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 11:02:59.262: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-026f07a9-97bb-49e0-ac01-69a8299d6b04 STEP: Creating a pod to test consume configMaps Aug 17 11:02:59.376: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-dacf4d98-8926-4d53-9ab5-feca48c2b27d" in namespace "projected-7129" to be "Succeeded or Failed" Aug 17 11:02:59.381: INFO: Pod "pod-projected-configmaps-dacf4d98-8926-4d53-9ab5-feca48c2b27d": Phase="Pending", Reason="", readiness=false. Elapsed: 5.681087ms Aug 17 11:03:01.436: INFO: Pod "pod-projected-configmaps-dacf4d98-8926-4d53-9ab5-feca48c2b27d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060354823s Aug 17 11:03:03.443: INFO: Pod "pod-projected-configmaps-dacf4d98-8926-4d53-9ab5-feca48c2b27d": Phase="Running", Reason="", readiness=true. Elapsed: 4.067494934s Aug 17 11:03:05.624: INFO: Pod "pod-projected-configmaps-dacf4d98-8926-4d53-9ab5-feca48c2b27d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.248097319s STEP: Saw pod success Aug 17 11:03:05.624: INFO: Pod "pod-projected-configmaps-dacf4d98-8926-4d53-9ab5-feca48c2b27d" satisfied condition "Succeeded or Failed" Aug 17 11:03:05.631: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-dacf4d98-8926-4d53-9ab5-feca48c2b27d container projected-configmap-volume-test: STEP: delete the pod Aug 17 11:03:05.798: INFO: Waiting for pod pod-projected-configmaps-dacf4d98-8926-4d53-9ab5-feca48c2b27d to disappear Aug 17 11:03:05.850: INFO: Pod pod-projected-configmaps-dacf4d98-8926-4d53-9ab5-feca48c2b27d no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 11:03:05.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7129" for this suite. • [SLOW TEST:7.058 seconds] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":303,"completed":18,"skipped":327,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 11:03:06.323: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on node default medium Aug 17 11:03:06.492: INFO: Waiting up to 5m0s for pod "pod-c7e43be6-3e5a-4004-9f0a-e7b0ace211ca" in namespace "emptydir-1537" to be "Succeeded or Failed" Aug 17 11:03:06.530: INFO: Pod "pod-c7e43be6-3e5a-4004-9f0a-e7b0ace211ca": Phase="Pending", Reason="", readiness=false. Elapsed: 37.073681ms Aug 17 11:03:08.569: INFO: Pod "pod-c7e43be6-3e5a-4004-9f0a-e7b0ace211ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076597036s Aug 17 11:03:10.702: INFO: Pod "pod-c7e43be6-3e5a-4004-9f0a-e7b0ace211ca": Phase="Pending", Reason="", readiness=false. Elapsed: 4.209748156s Aug 17 11:03:12.743: INFO: Pod "pod-c7e43be6-3e5a-4004-9f0a-e7b0ace211ca": Phase="Pending", Reason="", readiness=false. Elapsed: 6.250945568s Aug 17 11:03:14.751: INFO: Pod "pod-c7e43be6-3e5a-4004-9f0a-e7b0ace211ca": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.258255462s STEP: Saw pod success Aug 17 11:03:14.751: INFO: Pod "pod-c7e43be6-3e5a-4004-9f0a-e7b0ace211ca" satisfied condition "Succeeded or Failed" Aug 17 11:03:14.756: INFO: Trying to get logs from node latest-worker pod pod-c7e43be6-3e5a-4004-9f0a-e7b0ace211ca container test-container: STEP: delete the pod Aug 17 11:03:14.857: INFO: Waiting for pod pod-c7e43be6-3e5a-4004-9f0a-e7b0ace211ca to disappear Aug 17 11:03:14.861: INFO: Pod pod-c7e43be6-3e5a-4004-9f0a-e7b0ace211ca no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 11:03:14.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1537" for this suite. • [SLOW TEST:8.550 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":19,"skipped":360,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 11:03:14.874: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Aug 17 11:03:15.141: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Aug 17 11:04:41.189: INFO: >>> kubeConfig: /root/.kube/config Aug 17 11:05:02.296: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 11:06:29.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3222" for this suite. 
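------------------------------
Context for the CustomResourcePublishOpenAPI test just completed: it registers CRDs carrying two versions in one API group and checks that every served version appears in the published OpenAPI document. A minimal Go sketch of such a multi-version CRD follows; the group, kind, and version names are illustrative, not the test's generated ones.

package main

import (
	"fmt"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	schema := &apiextensionsv1.CustomResourceValidation{
		OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{Type: "object"},
	}
	crd := apiextensionsv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "foos.example.com"}, // illustrative
		Spec: apiextensionsv1.CustomResourceDefinitionSpec{
			Group: "example.com",
			Scope: apiextensionsv1.NamespaceScoped,
			Names: apiextensionsv1.CustomResourceDefinitionNames{
				Plural: "foos", Singular: "foo", Kind: "Foo", ListKind: "FooList",
			},
			// Two versions in the same group; both served, exactly one marked
			// as the storage version. Served versions are what show up in the
			// OpenAPI document the test inspects.
			Versions: []apiextensionsv1.CustomResourceDefinitionVersion{
				{Name: "v1", Served: true, Storage: true, Schema: schema},
				{Name: "v2", Served: true, Storage: false, Schema: schema},
			},
		},
	}
	fmt.Println(crd.Name, "with", len(crd.Spec.Versions), "versions")
}
------------------------------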
• [SLOW TEST:194.685 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":303,"completed":20,"skipped":398,"failed":0} [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 11:06:29.561: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 17 11:06:32.759: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 17 11:06:34.897: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733259192, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733259192, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733259192, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733259192, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 17 11:06:37.980: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: 
Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 11:06:38.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5707" for this suite. STEP: Destroying namespace "webhook-5707-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.690 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":303,"completed":21,"skipped":398,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 11:06:38.253: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-map-5a29a5c8-ca9d-46f2-95cf-407fa1da2711 STEP: Creating a pod to test consume secrets Aug 17 11:06:38.612: INFO: Waiting up to 5m0s for pod "pod-secrets-ba90a41f-cc5d-49cc-8801-a7264ab76fff" in namespace "secrets-2866" to be "Succeeded or Failed" Aug 17 11:06:38.837: INFO: Pod "pod-secrets-ba90a41f-cc5d-49cc-8801-a7264ab76fff": Phase="Pending", Reason="", readiness=false. Elapsed: 225.128834ms Aug 17 11:06:40.845: INFO: Pod "pod-secrets-ba90a41f-cc5d-49cc-8801-a7264ab76fff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.232976902s Aug 17 11:06:42.853: INFO: Pod "pod-secrets-ba90a41f-cc5d-49cc-8801-a7264ab76fff": Phase="Pending", Reason="", readiness=false. Elapsed: 4.240798496s Aug 17 11:06:44.860: INFO: Pod "pod-secrets-ba90a41f-cc5d-49cc-8801-a7264ab76fff": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.248449807s STEP: Saw pod success Aug 17 11:06:44.860: INFO: Pod "pod-secrets-ba90a41f-cc5d-49cc-8801-a7264ab76fff" satisfied condition "Succeeded or Failed" Aug 17 11:06:44.865: INFO: Trying to get logs from node latest-worker pod pod-secrets-ba90a41f-cc5d-49cc-8801-a7264ab76fff container secret-volume-test: STEP: delete the pod Aug 17 11:06:44.916: INFO: Waiting for pod pod-secrets-ba90a41f-cc5d-49cc-8801-a7264ab76fff to disappear Aug 17 11:06:44.922: INFO: Pod pod-secrets-ba90a41f-cc5d-49cc-8801-a7264ab76fff no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 11:06:44.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2866" for this suite. • [SLOW TEST:6.681 seconds] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":22,"skipped":415,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 11:06:44.935: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on node default medium Aug 17 11:06:45.036: INFO: Waiting up to 5m0s for pod "pod-c71b2d7b-252c-41ac-ac48-72feb963b07a" in namespace "emptydir-3170" to be "Succeeded or Failed" Aug 17 11:06:45.054: INFO: Pod "pod-c71b2d7b-252c-41ac-ac48-72feb963b07a": Phase="Pending", Reason="", readiness=false. Elapsed: 18.178153ms Aug 17 11:06:47.061: INFO: Pod "pod-c71b2d7b-252c-41ac-ac48-72feb963b07a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025472024s Aug 17 11:06:49.069: INFO: Pod "pod-c71b2d7b-252c-41ac-ac48-72feb963b07a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032879787s Aug 17 11:06:51.084: INFO: Pod "pod-c71b2d7b-252c-41ac-ac48-72feb963b07a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.04854185s STEP: Saw pod success Aug 17 11:06:51.084: INFO: Pod "pod-c71b2d7b-252c-41ac-ac48-72feb963b07a" satisfied condition "Succeeded or Failed" Aug 17 11:06:51.096: INFO: Trying to get logs from node latest-worker pod pod-c71b2d7b-252c-41ac-ac48-72feb963b07a container test-container: STEP: delete the pod Aug 17 11:06:51.461: INFO: Waiting for pod pod-c71b2d7b-252c-41ac-ac48-72feb963b07a to disappear Aug 17 11:06:51.671: INFO: Pod pod-c71b2d7b-252c-41ac-ac48-72feb963b07a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 11:06:51.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3170" for this suite. • [SLOW TEST:6.748 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":23,"skipped":433,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 11:06:51.685: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 11:06:52.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-6673" for this suite. 
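------------------------------
Context for the Table-transformation test just completed: a client can ask the API server to render any list as a meta.k8s.io/v1 Table via the Accept header, and a backend that cannot produce Table metadata is expected to answer 406 Not Acceptable. The Go sketch below shows the request shape only; the pods path is illustrative (pods do support Table rendering), whereas the e2e test deliberately targets a backend without that support to provoke the 406.

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Ask the server to serialize the response as a meta.k8s.io/v1 Table.
	res := cs.CoreV1().RESTClient().Get().
		AbsPath("/api/v1/namespaces/default/pods"). // illustrative endpoint
		SetHeader("Accept", "application/json;as=Table;v=v1;g=meta.k8s.io").
		Do(context.TODO())
	var status int
	res.StatusCode(&status)
	// Against a backend lacking Table support this would be 406.
	fmt.Println("HTTP status:", status)
}
------------------------------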
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":303,"completed":24,"skipped":453,"failed":0} SSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 11:06:52.482: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3792.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-3792.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3792.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3792.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-3792.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-3792.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-3792.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-3792.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3792.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3792.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-3792.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3792.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-3792.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-3792.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-3792.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-3792.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-3792.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-3792.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 17 11:07:05.855: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3792.svc.cluster.local from pod dns-3792/dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e: the server could not find the requested resource (get pods dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e) Aug 17 11:07:05.868: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3792.svc.cluster.local from pod dns-3792/dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e: the server could not find the requested resource (get pods dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e) Aug 17 11:07:05.873: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3792.svc.cluster.local from pod dns-3792/dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e: the server could not find the requested resource (get pods dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e) Aug 17 11:07:05.877: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3792.svc.cluster.local from pod dns-3792/dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e: the server could not find the requested resource (get pods dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e) Aug 17 11:07:05.888: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3792.svc.cluster.local from pod dns-3792/dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e: the server could not find the requested resource (get pods dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e) Aug 17 11:07:05.891: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3792.svc.cluster.local from pod dns-3792/dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e: the server could not find the requested resource (get pods dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e) Aug 17 11:07:05.895: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3792.svc.cluster.local from pod dns-3792/dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e: the server could not find the requested resource (get pods dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e) Aug 17 11:07:05.899: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3792.svc.cluster.local from pod dns-3792/dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e: the server could not find the requested resource (get pods dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e) Aug 17 11:07:05.907: INFO: Lookups using dns-3792/dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3792.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3792.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3792.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3792.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3792.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3792.svc.cluster.local jessie_udp@dns-test-service-2.dns-3792.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3792.svc.cluster.local] Aug 17 11:07:10.915: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3792.svc.cluster.local from pod dns-3792/dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e: the server could not find the requested resource 
(get pods dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e) Aug 17 11:07:10.922: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3792.svc.cluster.local from pod dns-3792/dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e: the server could not find the requested resource (get pods dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e) Aug 17 11:07:10.928: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3792.svc.cluster.local from pod dns-3792/dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e: the server could not find the requested resource (get pods dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e) Aug 17 11:07:10.933: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3792.svc.cluster.local from pod dns-3792/dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e: the server could not find the requested resource (get pods dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e) Aug 17 11:07:10.945: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3792.svc.cluster.local from pod dns-3792/dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e: the server could not find the requested resource (get pods dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e) Aug 17 11:07:10.949: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3792.svc.cluster.local from pod dns-3792/dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e: the server could not find the requested resource (get pods dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e) Aug 17 11:07:11.153: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3792.svc.cluster.local from pod dns-3792/dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e: the server could not find the requested resource (get pods dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e) Aug 17 11:07:11.158: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3792.svc.cluster.local from pod dns-3792/dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e: the server could not find the requested resource (get pods dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e) Aug 17 11:07:11.210: INFO: Lookups using dns-3792/dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3792.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3792.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3792.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3792.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3792.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3792.svc.cluster.local jessie_udp@dns-test-service-2.dns-3792.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3792.svc.cluster.local] Aug 17 11:07:16.028: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3792.svc.cluster.local from pod dns-3792/dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e: the server could not find the requested resource (get pods dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e) Aug 17 11:07:16.033: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3792.svc.cluster.local from pod dns-3792/dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e: the server could not find the requested resource (get pods dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e) Aug 17 11:07:16.037: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3792.svc.cluster.local from pod dns-3792/dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e: the server could not find the requested resource (get pods dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e) Aug 17 11:07:16.043: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3792.svc.cluster.local from 
pod dns-3792/dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e: the server could not find the requested resource (get pods dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e) Aug 17 11:07:18.549: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3792.svc.cluster.local from pod dns-3792/dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e: the server could not find the requested resource (get pods dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e) Aug 17 11:07:18.721: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3792.svc.cluster.local from pod dns-3792/dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e: the server could not find the requested resource (get pods dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e) Aug 17 11:07:18.768: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3792.svc.cluster.local from pod dns-3792/dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e: the server could not find the requested resource (get pods dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e) Aug 17 11:07:19.154: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3792.svc.cluster.local from pod dns-3792/dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e: the server could not find the requested resource (get pods dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e) Aug 17 11:07:19.306: INFO: Lookups using dns-3792/dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3792.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3792.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3792.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3792.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3792.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3792.svc.cluster.local jessie_udp@dns-test-service-2.dns-3792.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3792.svc.cluster.local] Aug 17 11:07:20.913: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3792.svc.cluster.local from pod dns-3792/dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e: the server could not find the requested resource (get pods dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e) Aug 17 11:07:20.918: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3792.svc.cluster.local from pod dns-3792/dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e: the server could not find the requested resource (get pods dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e) Aug 17 11:07:20.922: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3792.svc.cluster.local from pod dns-3792/dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e: the server could not find the requested resource (get pods dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e) Aug 17 11:07:20.925: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3792.svc.cluster.local from pod dns-3792/dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e: the server could not find the requested resource (get pods dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e) Aug 17 11:07:20.971: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3792.svc.cluster.local from pod dns-3792/dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e: the server could not find the requested resource (get pods dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e) Aug 17 11:07:20.975: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3792.svc.cluster.local from pod dns-3792/dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e: the server could not find the requested resource (get pods 
dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e) Aug 17 11:07:20.978: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3792.svc.cluster.local from pod dns-3792/dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e: the server could not find the requested resource (get pods dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e) Aug 17 11:07:20.982: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3792.svc.cluster.local from pod dns-3792/dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e: the server could not find the requested resource (get pods dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e) Aug 17 11:07:20.990: INFO: Lookups using dns-3792/dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3792.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3792.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3792.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3792.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3792.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3792.svc.cluster.local jessie_udp@dns-test-service-2.dns-3792.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3792.svc.cluster.local] Aug 17 11:07:25.913: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3792.svc.cluster.local from pod dns-3792/dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e: the server could not find the requested resource (get pods dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e) Aug 17 11:07:25.917: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3792.svc.cluster.local from pod dns-3792/dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e: the server could not find the requested resource (get pods dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e) Aug 17 11:07:25.921: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3792.svc.cluster.local from pod dns-3792/dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e: the server could not find the requested resource (get pods dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e) Aug 17 11:07:25.925: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3792.svc.cluster.local from pod dns-3792/dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e: the server could not find the requested resource (get pods dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e) Aug 17 11:07:25.936: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3792.svc.cluster.local from pod dns-3792/dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e: the server could not find the requested resource (get pods dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e) Aug 17 11:07:25.939: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3792.svc.cluster.local from pod dns-3792/dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e: the server could not find the requested resource (get pods dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e) Aug 17 11:07:25.942: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3792.svc.cluster.local from pod dns-3792/dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e: the server could not find the requested resource (get pods dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e) Aug 17 11:07:25.945: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3792.svc.cluster.local from pod dns-3792/dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e: the server could not find the requested resource (get pods dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e) Aug 17 11:07:25.951: INFO: Lookups using dns-3792/dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e failed for: 
[wheezy_udp@dns-querier-2.dns-test-service-2.dns-3792.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3792.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3792.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3792.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3792.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3792.svc.cluster.local jessie_udp@dns-test-service-2.dns-3792.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3792.svc.cluster.local] Aug 17 11:07:31.591: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3792.svc.cluster.local from pod dns-3792/dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e: the server could not find the requested resource (get pods dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e) Aug 17 11:07:31.654: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3792.svc.cluster.local from pod dns-3792/dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e: the server could not find the requested resource (get pods dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e) Aug 17 11:07:31.658: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3792.svc.cluster.local from pod dns-3792/dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e: the server could not find the requested resource (get pods dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e) Aug 17 11:07:31.664: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3792.svc.cluster.local from pod dns-3792/dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e: the server could not find the requested resource (get pods dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e) Aug 17 11:07:31.675: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3792.svc.cluster.local from pod dns-3792/dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e: the server could not find the requested resource (get pods dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e) Aug 17 11:07:31.679: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3792.svc.cluster.local from pod dns-3792/dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e: the server could not find the requested resource (get pods dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e) Aug 17 11:07:31.682: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3792.svc.cluster.local from pod dns-3792/dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e: the server could not find the requested resource (get pods dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e) Aug 17 11:07:31.686: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3792.svc.cluster.local from pod dns-3792/dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e: the server could not find the requested resource (get pods dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e) Aug 17 11:07:31.788: INFO: Lookups using dns-3792/dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3792.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3792.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3792.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3792.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3792.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3792.svc.cluster.local jessie_udp@dns-test-service-2.dns-3792.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3792.svc.cluster.local] Aug 17 11:07:35.969: INFO: DNS probes using dns-3792/dns-test-0f99b6b3-5be4-4dc7-94c6-a47e4c86dd5e succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS 
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 11:07:37.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3792" for this suite. • [SLOW TEST:45.096 seconds] [sig-network] DNS /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":303,"completed":25,"skipped":458,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 11:07:37.580: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-583 STEP: creating service affinity-clusterip in namespace services-583 STEP: creating replication controller affinity-clusterip in namespace services-583 I0817 11:07:39.152326 10 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-583, replica count: 3 I0817 11:07:42.206700 10 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0817 11:07:45.208844 10 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0817 11:07:48.209753 10 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0817 11:07:51.211376 10 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 17 11:07:51.284: INFO: Creating new exec pod Aug 17 11:07:58.590: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-583 execpod-affinityf9k6l -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80' Aug 17 11:08:00.301: INFO: stderr: "I0817 11:08:00.178083 115 log.go:181] (0x4000d02fd0) (0x4000b0c780) Create stream\nI0817 11:08:00.180431 115 log.go:181] (0x4000d02fd0) (0x4000b0c780) Stream added, 
broadcasting: 1\nI0817 11:08:00.202553 115 log.go:181] (0x4000d02fd0) Reply frame received for 1\nI0817 11:08:00.203180 115 log.go:181] (0x4000d02fd0) (0x4000b0c000) Create stream\nI0817 11:08:00.203247 115 log.go:181] (0x4000d02fd0) (0x4000b0c000) Stream added, broadcasting: 3\nI0817 11:08:00.204622 115 log.go:181] (0x4000d02fd0) Reply frame received for 3\nI0817 11:08:00.204902 115 log.go:181] (0x4000d02fd0) (0x4000c2e000) Create stream\nI0817 11:08:00.204965 115 log.go:181] (0x4000d02fd0) (0x4000c2e000) Stream added, broadcasting: 5\nI0817 11:08:00.206074 115 log.go:181] (0x4000d02fd0) Reply frame received for 5\nI0817 11:08:00.281936 115 log.go:181] (0x4000d02fd0) Data frame received for 5\nI0817 11:08:00.282173 115 log.go:181] (0x4000d02fd0) Data frame received for 3\nI0817 11:08:00.282454 115 log.go:181] (0x4000c2e000) (5) Data frame handling\nI0817 11:08:00.282672 115 log.go:181] (0x4000b0c000) (3) Data frame handling\nI0817 11:08:00.283762 115 log.go:181] (0x4000d02fd0) Data frame received for 1\nI0817 11:08:00.283848 115 log.go:181] (0x4000b0c780) (1) Data frame handling\n+ nc -zv -t -w 2 affinity-clusterip 80\nI0817 11:08:00.284219 115 log.go:181] (0x4000c2e000) (5) Data frame sent\nI0817 11:08:00.284393 115 log.go:181] (0x4000d02fd0) Data frame received for 5\nI0817 11:08:00.284454 115 log.go:181] (0x4000c2e000) (5) Data frame handling\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\nI0817 11:08:00.285641 115 log.go:181] (0x4000c2e000) (5) Data frame sent\nI0817 11:08:00.285730 115 log.go:181] (0x4000d02fd0) Data frame received for 5\nI0817 11:08:00.285812 115 log.go:181] (0x4000b0c780) (1) Data frame sent\nI0817 11:08:00.285890 115 log.go:181] (0x4000c2e000) (5) Data frame handling\nI0817 11:08:00.287481 115 log.go:181] (0x4000d02fd0) (0x4000b0c780) Stream removed, broadcasting: 1\nI0817 11:08:00.289443 115 log.go:181] (0x4000d02fd0) Go away received\nI0817 11:08:00.292922 115 log.go:181] (0x4000d02fd0) (0x4000b0c780) Stream removed, broadcasting: 1\nI0817 11:08:00.293200 115 log.go:181] (0x4000d02fd0) (0x4000b0c000) Stream removed, broadcasting: 3\nI0817 11:08:00.293413 115 log.go:181] (0x4000d02fd0) (0x4000c2e000) Stream removed, broadcasting: 5\n" Aug 17 11:08:00.303: INFO: stdout: "" Aug 17 11:08:00.310: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-583 execpod-affinityf9k6l -- /bin/sh -x -c nc -zv -t -w 2 10.108.7.166 80' Aug 17 11:08:01.927: INFO: stderr: "I0817 11:08:01.799760 135 log.go:181] (0x400054e000) (0x4000a34820) Create stream\nI0817 11:08:01.802269 135 log.go:181] (0x400054e000) (0x4000a34820) Stream added, broadcasting: 1\nI0817 11:08:01.811714 135 log.go:181] (0x400054e000) Reply frame received for 1\nI0817 11:08:01.812337 135 log.go:181] (0x400054e000) (0x40005b2500) Create stream\nI0817 11:08:01.812413 135 log.go:181] (0x400054e000) (0x40005b2500) Stream added, broadcasting: 3\nI0817 11:08:01.813777 135 log.go:181] (0x400054e000) Reply frame received for 3\nI0817 11:08:01.814033 135 log.go:181] (0x400054e000) (0x400062a000) Create stream\nI0817 11:08:01.814091 135 log.go:181] (0x400054e000) (0x400062a000) Stream added, broadcasting: 5\nI0817 11:08:01.815305 135 log.go:181] (0x400054e000) Reply frame received for 5\nI0817 11:08:01.886230 135 log.go:181] (0x400054e000) Data frame received for 5\nI0817 11:08:01.886810 135 log.go:181] (0x400054e000) Data frame received for 3\nI0817 11:08:01.886975 135 log.go:181] (0x40005b2500) (3) Data frame handling\nI0817 
11:08:01.887488 135 log.go:181] (0x400054e000) Data frame received for 1\nI0817 11:08:01.887608 135 log.go:181] (0x4000a34820) (1) Data frame handling\nI0817 11:08:01.887754 135 log.go:181] (0x400062a000) (5) Data frame handling\nI0817 11:08:01.888973 135 log.go:181] (0x400062a000) (5) Data frame sent\nI0817 11:08:01.889266 135 log.go:181] (0x4000a34820) (1) Data frame sent\n+ nc -zv -t -w 2 10.108.7.166 80\nConnection to 10.108.7.166 80 port [tcp/http] succeeded!\nI0817 11:08:01.890142 135 log.go:181] (0x400054e000) Data frame received for 5\nI0817 11:08:01.890216 135 log.go:181] (0x400062a000) (5) Data frame handling\nI0817 11:08:01.908505 135 log.go:181] (0x400054e000) (0x4000a34820) Stream removed, broadcasting: 1\nI0817 11:08:01.909236 135 log.go:181] (0x400054e000) Go away received\nI0817 11:08:01.916119 135 log.go:181] (0x400054e000) (0x4000a34820) Stream removed, broadcasting: 1\nI0817 11:08:01.916405 135 log.go:181] (0x400054e000) (0x40005b2500) Stream removed, broadcasting: 3\nI0817 11:08:01.916558 135 log.go:181] (0x400054e000) (0x400062a000) Stream removed, broadcasting: 5\n" Aug 17 11:08:01.928: INFO: stdout: "" Aug 17 11:08:01.929: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-583 execpod-affinityf9k6l -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.108.7.166:80/ ; done' Aug 17 11:08:03.613: INFO: stderr: "I0817 11:08:03.431637 156 log.go:181] (0x4000144bb0) (0x4000980280) Create stream\nI0817 11:08:03.436418 156 log.go:181] (0x4000144bb0) (0x4000980280) Stream added, broadcasting: 1\nI0817 11:08:03.449285 156 log.go:181] (0x4000144bb0) Reply frame received for 1\nI0817 11:08:03.450301 156 log.go:181] (0x4000144bb0) (0x4000980320) Create stream\nI0817 11:08:03.450391 156 log.go:181] (0x4000144bb0) (0x4000980320) Stream added, broadcasting: 3\nI0817 11:08:03.452141 156 log.go:181] (0x4000144bb0) Reply frame received for 3\nI0817 11:08:03.452382 156 log.go:181] (0x4000144bb0) (0x4000698f00) Create stream\nI0817 11:08:03.452443 156 log.go:181] (0x4000144bb0) (0x4000698f00) Stream added, broadcasting: 5\nI0817 11:08:03.453826 156 log.go:181] (0x4000144bb0) Reply frame received for 5\nI0817 11:08:03.515812 156 log.go:181] (0x4000144bb0) Data frame received for 5\nI0817 11:08:03.516164 156 log.go:181] (0x4000144bb0) Data frame received for 3\nI0817 11:08:03.516278 156 log.go:181] (0x4000980320) (3) Data frame handling\nI0817 11:08:03.516386 156 log.go:181] (0x4000698f00) (5) Data frame handling\nI0817 11:08:03.517050 156 log.go:181] (0x4000980320) (3) Data frame sent\nI0817 11:08:03.517538 156 log.go:181] (0x4000698f00) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.7.166:80/\nI0817 11:08:03.518515 156 log.go:181] (0x4000144bb0) Data frame received for 3\nI0817 11:08:03.518596 156 log.go:181] (0x4000980320) (3) Data frame handling\nI0817 11:08:03.518658 156 log.go:181] (0x4000980320) (3) Data frame sent\nI0817 11:08:03.519357 156 log.go:181] (0x4000144bb0) Data frame received for 5\nI0817 11:08:03.519430 156 log.go:181] (0x4000698f00) (5) Data frame handling\nI0817 11:08:03.519482 156 log.go:181] (0x4000698f00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.7.166:80/\nI0817 11:08:03.519578 156 log.go:181] (0x4000144bb0) Data frame received for 3\nI0817 11:08:03.519669 156 log.go:181] (0x4000980320) (3) Data frame handling\nI0817 11:08:03.519765 156 log.go:181] (0x4000980320) (3) Data 
frame sent\nI0817 11:08:03.523119 156 log.go:181] (0x4000144bb0) Data frame received for 3\nI0817 11:08:03.523186 156 log.go:181] (0x4000980320) (3) Data frame handling\nI0817 11:08:03.523273 156 log.go:181] (0x4000980320) (3) Data frame sent\nI0817 11:08:03.524167 156 log.go:181] (0x4000144bb0) Data frame received for 5\nI0817 11:08:03.524243 156 log.go:181] (0x4000698f00) (5) Data frame handling\n+ echo\nI0817 11:08:03.524310 156 log.go:181] (0x4000144bb0) Data frame received for 3\nI0817 11:08:03.524411 156 log.go:181] (0x4000980320) (3) Data frame handling\nI0817 11:08:03.524491 156 log.go:181] (0x4000980320) (3) Data frame sent\nI0817 11:08:03.524550 156 log.go:181] (0x4000698f00) (5) Data frame sent\nI0817 11:08:03.524636 156 log.go:181] (0x4000144bb0) Data frame received for 5\nI0817 11:08:03.524708 156 log.go:181] (0x4000698f00) (5) Data frame handling\nI0817 11:08:03.524841 156 log.go:181] (0x4000698f00) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.108.7.166:80/\nI0817 11:08:03.533613 156 log.go:181] (0x4000144bb0) Data frame received for 3\nI0817 11:08:03.533715 156 log.go:181] (0x4000980320) (3) Data frame handling\nI0817 11:08:03.533783 156 log.go:181] (0x4000980320) (3) Data frame sent\nI0817 11:08:03.533850 156 log.go:181] (0x4000144bb0) Data frame received for 3\nI0817 11:08:03.533902 156 log.go:181] (0x4000144bb0) Data frame received for 5\nI0817 11:08:03.533963 156 log.go:181] (0x4000698f00) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.7.166:80/\nI0817 11:08:03.534019 156 log.go:181] (0x4000980320) (3) Data frame handling\nI0817 11:08:03.534115 156 log.go:181] (0x4000980320) (3) Data frame sent\nI0817 11:08:03.534201 156 log.go:181] (0x4000698f00) (5) Data frame sent\nI0817 11:08:03.534873 156 log.go:181] (0x4000144bb0) Data frame received for 3\nI0817 11:08:03.534957 156 log.go:181] (0x4000980320) (3) Data frame handling\nI0817 11:08:03.535051 156 log.go:181] (0x4000980320) (3) Data frame sent\nI0817 11:08:03.535384 156 log.go:181] (0x4000144bb0) Data frame received for 3\nI0817 11:08:03.535453 156 log.go:181] (0x4000980320) (3) Data frame handling\nI0817 11:08:03.535501 156 log.go:181] (0x4000980320) (3) Data frame sent\nI0817 11:08:03.535545 156 log.go:181] (0x4000144bb0) Data frame received for 5\nI0817 11:08:03.535586 156 log.go:181] (0x4000698f00) (5) Data frame handling\nI0817 11:08:03.535636 156 log.go:181] (0x4000698f00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.7.166:80/\nI0817 11:08:03.539819 156 log.go:181] (0x4000144bb0) Data frame received for 3\nI0817 11:08:03.539911 156 log.go:181] (0x4000980320) (3) Data frame handling\nI0817 11:08:03.540012 156 log.go:181] (0x4000980320) (3) Data frame sent\nI0817 11:08:03.540452 156 log.go:181] (0x4000144bb0) Data frame received for 5\nI0817 11:08:03.540597 156 log.go:181] (0x4000698f00) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeoutI0817 11:08:03.540733 156 log.go:181] (0x4000144bb0) Data frame received for 3\nI0817 11:08:03.540824 156 log.go:181] (0x4000980320) (3) Data frame handling\nI0817 11:08:03.540927 156 log.go:181] (0x4000698f00) (5) Data frame sent\nI0817 11:08:03.541069 156 log.go:181] (0x4000144bb0) Data frame received for 5\nI0817 11:08:03.541184 156 log.go:181] (0x4000698f00) (5) Data frame handling\nI0817 11:08:03.541291 156 log.go:181] (0x4000698f00) (5) Data frame sent\n 2 http://10.108.7.166:80/\nI0817 11:08:03.541423 156 log.go:181] (0x4000980320) (3) Data frame sent\nI0817 11:08:03.545974 156 
log.go:181] (0x4000144bb0) Data frame received for 3\nI0817 11:08:03.546038 156 log.go:181] (0x4000980320) (3) Data frame handling\nI0817 11:08:03.546124 156 log.go:181] (0x4000980320) (3) Data frame sent\nI0817 11:08:03.546620 156 log.go:181] (0x4000144bb0) Data frame received for 3\nI0817 11:08:03.546700 156 log.go:181] (0x4000980320) (3) Data frame handling\nI0817 11:08:03.546774 156 log.go:181] (0x4000144bb0) Data frame received for 5\nI0817 11:08:03.546866 156 log.go:181] (0x4000698f00) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.7.166:80/\nI0817 11:08:03.546926 156 log.go:181] (0x4000980320) (3) Data frame sent\nI0817 11:08:03.546983 156 log.go:181] (0x4000698f00) (5) Data frame sent\nI0817 11:08:03.549805 156 log.go:181] (0x4000144bb0) Data frame received for 3\nI0817 11:08:03.549892 156 log.go:181] (0x4000980320) (3) Data frame handling\nI0817 11:08:03.549984 156 log.go:181] (0x4000980320) (3) Data frame sent\nI0817 11:08:03.550498 156 log.go:181] (0x4000144bb0) Data frame received for 3\nI0817 11:08:03.550588 156 log.go:181] (0x4000980320) (3) Data frame handling\nI0817 11:08:03.550663 156 log.go:181] (0x4000144bb0) Data frame received for 5\nI0817 11:08:03.550743 156 log.go:181] (0x4000698f00) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.7.166:80/\nI0817 11:08:03.550812 156 log.go:181] (0x4000980320) (3) Data frame sent\nI0817 11:08:03.550879 156 log.go:181] (0x4000698f00) (5) Data frame sent\nI0817 11:08:03.554749 156 log.go:181] (0x4000144bb0) Data frame received for 3\nI0817 11:08:03.554843 156 log.go:181] (0x4000980320) (3) Data frame handling\nI0817 11:08:03.554959 156 log.go:181] (0x4000980320) (3) Data frame sent\nI0817 11:08:03.555191 156 log.go:181] (0x4000144bb0) Data frame received for 3\nI0817 11:08:03.555276 156 log.go:181] (0x4000980320) (3) Data frame handling\nI0817 11:08:03.555348 156 log.go:181] (0x4000980320) (3) Data frame sent\nI0817 11:08:03.555405 156 log.go:181] (0x4000144bb0) Data frame received for 5\nI0817 11:08:03.555466 156 log.go:181] (0x4000698f00) (5) Data frame handling\nI0817 11:08:03.555538 156 log.go:181] (0x4000698f00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.7.166:80/\nI0817 11:08:03.560949 156 log.go:181] (0x4000144bb0) Data frame received for 3\nI0817 11:08:03.561021 156 log.go:181] (0x4000980320) (3) Data frame handling\nI0817 11:08:03.561090 156 log.go:181] (0x4000980320) (3) Data frame sent\nI0817 11:08:03.561582 156 log.go:181] (0x4000144bb0) Data frame received for 5\nI0817 11:08:03.561662 156 log.go:181] (0x4000698f00) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.7.166:80/\nI0817 11:08:03.561735 156 log.go:181] (0x4000144bb0) Data frame received for 3\nI0817 11:08:03.561816 156 log.go:181] (0x4000980320) (3) Data frame handling\nI0817 11:08:03.561885 156 log.go:181] (0x4000698f00) (5) Data frame sent\nI0817 11:08:03.561959 156 log.go:181] (0x4000980320) (3) Data frame sent\nI0817 11:08:03.564640 156 log.go:181] (0x4000144bb0) Data frame received for 3\nI0817 11:08:03.564689 156 log.go:181] (0x4000980320) (3) Data frame handling\nI0817 11:08:03.564826 156 log.go:181] (0x4000980320) (3) Data frame sent\nI0817 11:08:03.565257 156 log.go:181] (0x4000144bb0) Data frame received for 5\nI0817 11:08:03.565328 156 log.go:181] (0x4000698f00) (5) Data frame handling\nI0817 11:08:03.565425 156 log.go:181] (0x4000698f00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 
http://10.108.7.166:80/\nI0817 11:08:03.565501 156 log.go:181] (0x4000144bb0) Data frame received for 3\nI0817 11:08:03.565560 156 log.go:181] (0x4000980320) (3) Data frame handling\nI0817 11:08:03.565629 156 log.go:181] (0x4000980320) (3) Data frame sent\nI0817 11:08:03.570079 156 log.go:181] (0x4000144bb0) Data frame received for 3\nI0817 11:08:03.570149 156 log.go:181] (0x4000980320) (3) Data frame handling\nI0817 11:08:03.570220 156 log.go:181] (0x4000980320) (3) Data frame sent\nI0817 11:08:03.570611 156 log.go:181] (0x4000144bb0) Data frame received for 5\nI0817 11:08:03.570701 156 log.go:181] (0x4000698f00) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.7.166:80/\nI0817 11:08:03.570792 156 log.go:181] (0x4000144bb0) Data frame received for 3\nI0817 11:08:03.570887 156 log.go:181] (0x4000980320) (3) Data frame handling\nI0817 11:08:03.570971 156 log.go:181] (0x4000980320) (3) Data frame sent\nI0817 11:08:03.571036 156 log.go:181] (0x4000698f00) (5) Data frame sent\nI0817 11:08:03.573726 156 log.go:181] (0x4000144bb0) Data frame received for 3\nI0817 11:08:03.573814 156 log.go:181] (0x4000980320) (3) Data frame handling\nI0817 11:08:03.573917 156 log.go:181] (0x4000980320) (3) Data frame sent\nI0817 11:08:03.574188 156 log.go:181] (0x4000144bb0) Data frame received for 5\nI0817 11:08:03.574284 156 log.go:181] (0x4000698f00) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.7.166:80/\nI0817 11:08:03.574361 156 log.go:181] (0x4000144bb0) Data frame received for 3\nI0817 11:08:03.574437 156 log.go:181] (0x4000980320) (3) Data frame handling\nI0817 11:08:03.574527 156 log.go:181] (0x4000698f00) (5) Data frame sent\nI0817 11:08:03.574623 156 log.go:181] (0x4000980320) (3) Data frame sent\nI0817 11:08:03.579187 156 log.go:181] (0x4000144bb0) Data frame received for 3\nI0817 11:08:03.579302 156 log.go:181] (0x4000980320) (3) Data frame handling\nI0817 11:08:03.579415 156 log.go:181] (0x4000980320) (3) Data frame sent\nI0817 11:08:03.580045 156 log.go:181] (0x4000144bb0) Data frame received for 5\nI0817 11:08:03.580161 156 log.go:181] (0x4000698f00) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.7.166:80/\nI0817 11:08:03.580279 156 log.go:181] (0x4000144bb0) Data frame received for 3\nI0817 11:08:03.580413 156 log.go:181] (0x4000980320) (3) Data frame handling\nI0817 11:08:03.580516 156 log.go:181] (0x4000698f00) (5) Data frame sent\nI0817 11:08:03.580626 156 log.go:181] (0x4000980320) (3) Data frame sent\nI0817 11:08:03.584969 156 log.go:181] (0x4000144bb0) Data frame received for 3\nI0817 11:08:03.585110 156 log.go:181] (0x4000980320) (3) Data frame handling\nI0817 11:08:03.585242 156 log.go:181] (0x4000980320) (3) Data frame sent\nI0817 11:08:03.585643 156 log.go:181] (0x4000144bb0) Data frame received for 3\nI0817 11:08:03.585740 156 log.go:181] (0x4000980320) (3) Data frame handling\nI0817 11:08:03.585827 156 log.go:181] (0x4000144bb0) Data frame received for 5\nI0817 11:08:03.585933 156 log.go:181] (0x4000698f00) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.7.166:80/\nI0817 11:08:03.586026 156 log.go:181] (0x4000980320) (3) Data frame sent\nI0817 11:08:03.586115 156 log.go:181] (0x4000698f00) (5) Data frame sent\nI0817 11:08:03.589914 156 log.go:181] (0x4000144bb0) Data frame received for 3\nI0817 11:08:03.590028 156 log.go:181] (0x4000980320) (3) Data frame handling\nI0817 11:08:03.590141 156 log.go:181] (0x4000980320) (3) Data frame sent\nI0817 
11:08:03.590325 156 log.go:181] (0x4000144bb0) Data frame received for 5\nI0817 11:08:03.590407 156 log.go:181] (0x4000698f00) (5) Data frame handling\nI0817 11:08:03.590491 156 log.go:181] (0x4000698f00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.7.166:80/\nI0817 11:08:03.590568 156 log.go:181] (0x4000144bb0) Data frame received for 3\nI0817 11:08:03.590633 156 log.go:181] (0x4000980320) (3) Data frame handling\nI0817 11:08:03.590705 156 log.go:181] (0x4000980320) (3) Data frame sent\nI0817 11:08:03.594345 156 log.go:181] (0x4000144bb0) Data frame received for 3\nI0817 11:08:03.594423 156 log.go:181] (0x4000980320) (3) Data frame handling\nI0817 11:08:03.594505 156 log.go:181] (0x4000980320) (3) Data frame sent\nI0817 11:08:03.595122 156 log.go:181] (0x4000144bb0) Data frame received for 3\nI0817 11:08:03.595242 156 log.go:181] (0x4000980320) (3) Data frame handling\nI0817 11:08:03.595355 156 log.go:181] (0x4000144bb0) Data frame received for 5\nI0817 11:08:03.595469 156 log.go:181] (0x4000698f00) (5) Data frame handling\nI0817 11:08:03.596837 156 log.go:181] (0x4000144bb0) Data frame received for 1\nI0817 11:08:03.596982 156 log.go:181] (0x4000980280) (1) Data frame handling\nI0817 11:08:03.597136 156 log.go:181] (0x4000980280) (1) Data frame sent\nI0817 11:08:03.598583 156 log.go:181] (0x4000144bb0) (0x4000980280) Stream removed, broadcasting: 1\nI0817 11:08:03.601280 156 log.go:181] (0x4000144bb0) Go away received\nI0817 11:08:03.604929 156 log.go:181] (0x4000144bb0) (0x4000980280) Stream removed, broadcasting: 1\nI0817 11:08:03.605254 156 log.go:181] (0x4000144bb0) (0x4000980320) Stream removed, broadcasting: 3\nI0817 11:08:03.605535 156 log.go:181] (0x4000144bb0) (0x4000698f00) Stream removed, broadcasting: 5\n" Aug 17 11:08:03.618: INFO: stdout: "\naffinity-clusterip-q6djv\naffinity-clusterip-q6djv\naffinity-clusterip-q6djv\naffinity-clusterip-q6djv\naffinity-clusterip-q6djv\naffinity-clusterip-q6djv\naffinity-clusterip-q6djv\naffinity-clusterip-q6djv\naffinity-clusterip-q6djv\naffinity-clusterip-q6djv\naffinity-clusterip-q6djv\naffinity-clusterip-q6djv\naffinity-clusterip-q6djv\naffinity-clusterip-q6djv\naffinity-clusterip-q6djv\naffinity-clusterip-q6djv" Aug 17 11:08:03.618: INFO: Received response from host: affinity-clusterip-q6djv Aug 17 11:08:03.618: INFO: Received response from host: affinity-clusterip-q6djv Aug 17 11:08:03.618: INFO: Received response from host: affinity-clusterip-q6djv Aug 17 11:08:03.618: INFO: Received response from host: affinity-clusterip-q6djv Aug 17 11:08:03.618: INFO: Received response from host: affinity-clusterip-q6djv Aug 17 11:08:03.618: INFO: Received response from host: affinity-clusterip-q6djv Aug 17 11:08:03.618: INFO: Received response from host: affinity-clusterip-q6djv Aug 17 11:08:03.618: INFO: Received response from host: affinity-clusterip-q6djv Aug 17 11:08:03.618: INFO: Received response from host: affinity-clusterip-q6djv Aug 17 11:08:03.619: INFO: Received response from host: affinity-clusterip-q6djv Aug 17 11:08:03.619: INFO: Received response from host: affinity-clusterip-q6djv Aug 17 11:08:03.619: INFO: Received response from host: affinity-clusterip-q6djv Aug 17 11:08:03.619: INFO: Received response from host: affinity-clusterip-q6djv Aug 17 11:08:03.619: INFO: Received response from host: affinity-clusterip-q6djv Aug 17 11:08:03.619: INFO: Received response from host: affinity-clusterip-q6djv Aug 17 11:08:03.619: INFO: Received response from host: affinity-clusterip-q6djv Aug 17 11:08:03.619: 
INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip in namespace services-583, will wait for the garbage collector to delete the pods Aug 17 11:08:04.277: INFO: Deleting ReplicationController affinity-clusterip took: 133.112817ms Aug 17 11:08:04.579: INFO: Terminating ReplicationController affinity-clusterip pods took: 301.924588ms [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 11:08:20.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-583" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:43.145 seconds] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":303,"completed":26,"skipped":468,"failed":0} SSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 11:08:20.726: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod busybox-63ed29ef-db51-403c-92e4-b08d68c4bec8 in namespace container-probe-4972 Aug 17 11:08:27.930: INFO: Started pod busybox-63ed29ef-db51-403c-92e4-b08d68c4bec8 in namespace container-probe-4972 STEP: checking the pod's current state and verifying that restartCount is present Aug 17 11:08:27.936: INFO: Initial restart count of pod busybox-63ed29ef-db51-403c-92e4-b08d68c4bec8 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 11:12:29.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4972" for this suite. 
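The sixteen identical `affinity-clusterip-q6djv` responses in the curl loop earlier are the behaviour under test: with `SessionAffinity: ClientIP`, kube-proxy pins each client to a single backend pod. A minimal sketch of such a Service, using k8s.io/api types from the v0.19 module line (selector and names are illustrative, not the suite's source):

```go
// Sketch only: a ClientIP-affinity Service like the one the test creates.
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func affinityService() *corev1.Service {
	return &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "affinity-clusterip"},
		Spec: corev1.ServiceSpec{
			// Selector assumed; the test labels its RC pods to match.
			Selector: map[string]string{"name": "affinity-clusterip"},
			// ClientIP affinity is what makes every curl in the loop above
			// land on the same backend pod.
			SessionAffinity: corev1.ServiceAffinityClientIP,
			Ports:           []corev1.ServicePort{{Port: 80}},
		},
	}
}

func main() { _ = affinityService() }
```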
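The container-probe test just above passes because the exec probe `cat /tmp/health` keeps succeeding for the full observation window, so `restartCount` stays at 0 and the pod is deleted untouched. A hedged sketch of a pod with such a probe, again with v1.19-era `corev1` types (where the probe handler field is still named `Handler`; the busybox command is an assumption):

```go
// Sketch only: a pod whose exec liveness probe keeps passing, so the
// container is never restarted.
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func livenessPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-liveness"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "busybox",
				Image: "busybox",
				// Assumption: create the probed file up front, then idle.
				Command: []string{"/bin/sh", "-c", "touch /tmp/health; sleep 600"},
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
					},
					InitialDelaySeconds: 5,
					PeriodSeconds:       5,
				},
			}},
		},
	}
}

func main() { _ = livenessPod() }
```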
• [SLOW TEST:249.289 seconds] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":303,"completed":27,"skipped":475,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 11:12:30.018: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-2077 [It] should perform rolling updates and roll backs of template modifications [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a new StatefulSet Aug 17 11:12:30.279: INFO: Found 0 stateful pods, waiting for 3 Aug 17 11:12:40.289: INFO: Found 2 stateful pods, waiting for 3 Aug 17 11:12:50.293: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Aug 17 11:12:50.293: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Aug 17 11:12:50.293: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Aug 17 11:12:50.309: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2077 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 17 11:12:57.666: INFO: stderr: "I0817 11:12:57.482528 176 log.go:181] (0x40001a8370) (0x4000632000) Create stream\nI0817 11:12:57.488261 176 log.go:181] (0x40001a8370) (0x4000632000) Stream added, broadcasting: 1\nI0817 11:12:57.502208 176 log.go:181] (0x40001a8370) Reply frame received for 1\nI0817 11:12:57.503056 176 log.go:181] (0x40001a8370) (0x4000af80a0) Create stream\nI0817 11:12:57.503134 176 log.go:181] (0x40001a8370) (0x4000af80a0) Stream added, broadcasting: 3\nI0817 11:12:57.504651 176 log.go:181] (0x40001a8370) Reply frame received for 3\nI0817 11:12:57.504914 176 log.go:181] 
(0x40001a8370) (0x4000cc6320) Create stream\nI0817 11:12:57.504974 176 log.go:181] (0x40001a8370) (0x4000cc6320) Stream added, broadcasting: 5\nI0817 11:12:57.505979 176 log.go:181] (0x40001a8370) Reply frame received for 5\nI0817 11:12:57.578933 176 log.go:181] (0x40001a8370) Data frame received for 5\nI0817 11:12:57.579241 176 log.go:181] (0x4000cc6320) (5) Data frame handling\nI0817 11:12:57.579999 176 log.go:181] (0x4000cc6320) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0817 11:12:57.636288 176 log.go:181] (0x40001a8370) Data frame received for 3\nI0817 11:12:57.636534 176 log.go:181] (0x40001a8370) Data frame received for 5\nI0817 11:12:57.636855 176 log.go:181] (0x4000cc6320) (5) Data frame handling\nI0817 11:12:57.637195 176 log.go:181] (0x4000af80a0) (3) Data frame handling\nI0817 11:12:57.637370 176 log.go:181] (0x4000af80a0) (3) Data frame sent\nI0817 11:12:57.637480 176 log.go:181] (0x40001a8370) Data frame received for 3\nI0817 11:12:57.637587 176 log.go:181] (0x4000af80a0) (3) Data frame handling\nI0817 11:12:57.638711 176 log.go:181] (0x40001a8370) Data frame received for 1\nI0817 11:12:57.638785 176 log.go:181] (0x4000632000) (1) Data frame handling\nI0817 11:12:57.638854 176 log.go:181] (0x4000632000) (1) Data frame sent\nI0817 11:12:57.641094 176 log.go:181] (0x40001a8370) (0x4000632000) Stream removed, broadcasting: 1\nI0817 11:12:57.645194 176 log.go:181] (0x40001a8370) Go away received\nI0817 11:12:57.648904 176 log.go:181] (0x40001a8370) (0x4000632000) Stream removed, broadcasting: 1\nI0817 11:12:57.649519 176 log.go:181] (0x40001a8370) (0x4000af80a0) Stream removed, broadcasting: 3\nI0817 11:12:57.650125 176 log.go:181] (0x40001a8370) (0x4000cc6320) Stream removed, broadcasting: 5\n" Aug 17 11:12:57.666: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 17 11:12:57.667: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Aug 17 11:13:07.718: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Aug 17 11:13:17.802: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2077 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 17 11:13:19.426: INFO: stderr: "I0817 11:13:19.304884 197 log.go:181] (0x40006b20b0) (0x40003ac000) Create stream\nI0817 11:13:19.307355 197 log.go:181] (0x40006b20b0) (0x40003ac000) Stream added, broadcasting: 1\nI0817 11:13:19.318572 197 log.go:181] (0x40006b20b0) Reply frame received for 1\nI0817 11:13:19.319137 197 log.go:181] (0x40006b20b0) (0x40003ac0a0) Create stream\nI0817 11:13:19.319196 197 log.go:181] (0x40006b20b0) (0x40003ac0a0) Stream added, broadcasting: 3\nI0817 11:13:19.320473 197 log.go:181] (0x40006b20b0) Reply frame received for 3\nI0817 11:13:19.320686 197 log.go:181] (0x40006b20b0) (0x40005cc000) Create stream\nI0817 11:13:19.320792 197 log.go:181] (0x40006b20b0) (0x40005cc000) Stream added, broadcasting: 5\nI0817 11:13:19.321751 197 log.go:181] (0x40006b20b0) Reply frame received for 5\nI0817 11:13:19.404645 197 log.go:181] (0x40006b20b0) Data frame received for 3\nI0817 11:13:19.405170 197 log.go:181] (0x40003ac0a0) (3) Data frame handling\nI0817 11:13:19.405587 197 log.go:181] 
(0x40006b20b0) Data frame received for 5\nI0817 11:13:19.405816 197 log.go:181] (0x40005cc000) (5) Data frame handling\nI0817 11:13:19.406012 197 log.go:181] (0x40003ac0a0) (3) Data frame sent\nI0817 11:13:19.406214 197 log.go:181] (0x40006b20b0) Data frame received for 1\nI0817 11:13:19.406309 197 log.go:181] (0x40003ac000) (1) Data frame handling\nI0817 11:13:19.406399 197 log.go:181] (0x40003ac000) (1) Data frame sent\nI0817 11:13:19.406845 197 log.go:181] (0x40006b20b0) Data frame received for 3\nI0817 11:13:19.406916 197 log.go:181] (0x40003ac0a0) (3) Data frame handling\nI0817 11:13:19.407305 197 log.go:181] (0x40005cc000) (5) Data frame sent\nI0817 11:13:19.407396 197 log.go:181] (0x40006b20b0) Data frame received for 5\nI0817 11:13:19.407477 197 log.go:181] (0x40005cc000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0817 11:13:19.410572 197 log.go:181] (0x40006b20b0) (0x40003ac000) Stream removed, broadcasting: 1\nI0817 11:13:19.411854 197 log.go:181] (0x40006b20b0) Go away received\nI0817 11:13:19.415633 197 log.go:181] (0x40006b20b0) (0x40003ac000) Stream removed, broadcasting: 1\nI0817 11:13:19.415913 197 log.go:181] (0x40006b20b0) (0x40003ac0a0) Stream removed, broadcasting: 3\nI0817 11:13:19.416103 197 log.go:181] (0x40006b20b0) (0x40005cc000) Stream removed, broadcasting: 5\n" Aug 17 11:13:19.427: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 17 11:13:19.427: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 17 11:13:19.558: INFO: Waiting for StatefulSet statefulset-2077/ss2 to complete update Aug 17 11:13:19.558: INFO: Waiting for Pod statefulset-2077/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Aug 17 11:13:19.559: INFO: Waiting for Pod statefulset-2077/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Aug 17 11:13:19.559: INFO: Waiting for Pod statefulset-2077/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Aug 17 11:13:30.979: INFO: Waiting for StatefulSet statefulset-2077/ss2 to complete update Aug 17 11:13:30.979: INFO: Waiting for Pod statefulset-2077/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Aug 17 11:13:30.979: INFO: Waiting for Pod statefulset-2077/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Aug 17 11:13:30.979: INFO: Waiting for Pod statefulset-2077/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Aug 17 11:13:39.701: INFO: Waiting for StatefulSet statefulset-2077/ss2 to complete update Aug 17 11:13:39.702: INFO: Waiting for Pod statefulset-2077/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Aug 17 11:13:39.702: INFO: Waiting for Pod statefulset-2077/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Aug 17 11:13:50.054: INFO: Waiting for StatefulSet statefulset-2077/ss2 to complete update Aug 17 11:13:50.055: INFO: Waiting for Pod statefulset-2077/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Aug 17 11:13:50.055: INFO: Waiting for Pod statefulset-2077/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Aug 17 11:13:59.573: INFO: Waiting for StatefulSet statefulset-2077/ss2 to complete update Aug 17 11:13:59.573: INFO: Waiting for Pod statefulset-2077/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Aug 17 11:14:09.620: INFO: Waiting for StatefulSet statefulset-2077/ss2 
to complete update Aug 17 11:14:09.620: INFO: Waiting for Pod statefulset-2077/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Rolling back to a previous revision Aug 17 11:14:19.571: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2077 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 17 11:14:21.485: INFO: stderr: "I0817 11:14:21.285038 217 log.go:181] (0x4000932000) (0x40001a25a0) Create stream\nI0817 11:14:21.292027 217 log.go:181] (0x4000932000) (0x40001a25a0) Stream added, broadcasting: 1\nI0817 11:14:21.301130 217 log.go:181] (0x4000932000) Reply frame received for 1\nI0817 11:14:21.301648 217 log.go:181] (0x4000932000) (0x4000c12280) Create stream\nI0817 11:14:21.301719 217 log.go:181] (0x4000932000) (0x4000c12280) Stream added, broadcasting: 3\nI0817 11:14:21.303251 217 log.go:181] (0x4000932000) Reply frame received for 3\nI0817 11:14:21.303704 217 log.go:181] (0x4000932000) (0x40001a30e0) Create stream\nI0817 11:14:21.303805 217 log.go:181] (0x4000932000) (0x40001a30e0) Stream added, broadcasting: 5\nI0817 11:14:21.305204 217 log.go:181] (0x4000932000) Reply frame received for 5\nI0817 11:14:21.393327 217 log.go:181] (0x4000932000) Data frame received for 5\nI0817 11:14:21.393547 217 log.go:181] (0x40001a30e0) (5) Data frame handling\nI0817 11:14:21.394017 217 log.go:181] (0x40001a30e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0817 11:14:21.463932 217 log.go:181] (0x4000932000) Data frame received for 3\nI0817 11:14:21.464123 217 log.go:181] (0x4000932000) Data frame received for 5\nI0817 11:14:21.464251 217 log.go:181] (0x40001a30e0) (5) Data frame handling\nI0817 11:14:21.464493 217 log.go:181] (0x4000c12280) (3) Data frame handling\nI0817 11:14:21.464696 217 log.go:181] (0x4000c12280) (3) Data frame sent\nI0817 11:14:21.464964 217 log.go:181] (0x4000932000) Data frame received for 3\nI0817 11:14:21.465133 217 log.go:181] (0x4000c12280) (3) Data frame handling\nI0817 11:14:21.466457 217 log.go:181] (0x4000932000) Data frame received for 1\nI0817 11:14:21.466543 217 log.go:181] (0x40001a25a0) (1) Data frame handling\nI0817 11:14:21.466651 217 log.go:181] (0x40001a25a0) (1) Data frame sent\nI0817 11:14:21.467605 217 log.go:181] (0x4000932000) (0x40001a25a0) Stream removed, broadcasting: 1\nI0817 11:14:21.471396 217 log.go:181] (0x4000932000) Go away received\nI0817 11:14:21.474970 217 log.go:181] (0x4000932000) (0x40001a25a0) Stream removed, broadcasting: 1\nI0817 11:14:21.475357 217 log.go:181] (0x4000932000) (0x4000c12280) Stream removed, broadcasting: 3\nI0817 11:14:21.475588 217 log.go:181] (0x4000932000) (0x40001a30e0) Stream removed, broadcasting: 5\n" Aug 17 11:14:21.486: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 17 11:14:21.486: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 17 11:14:31.535: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Aug 17 11:14:41.580: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2077 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 17 11:14:43.158: INFO: stderr: "I0817 11:14:43.032822 237 log.go:181] (0x40001ccd10) (0x40000277c0) Create stream\nI0817 11:14:43.037545 237 
log.go:181] (0x40001ccd10) (0x40000277c0) Stream added, broadcasting: 1\nI0817 11:14:43.049524 237 log.go:181] (0x40001ccd10) Reply frame received for 1\nI0817 11:14:43.050196 237 log.go:181] (0x40001ccd10) (0x400069a0a0) Create stream\nI0817 11:14:43.050278 237 log.go:181] (0x40001ccd10) (0x400069a0a0) Stream added, broadcasting: 3\nI0817 11:14:43.052113 237 log.go:181] (0x40001ccd10) Reply frame received for 3\nI0817 11:14:43.052599 237 log.go:181] (0x40001ccd10) (0x4000027860) Create stream\nI0817 11:14:43.052701 237 log.go:181] (0x40001ccd10) (0x4000027860) Stream added, broadcasting: 5\nI0817 11:14:43.054380 237 log.go:181] (0x40001ccd10) Reply frame received for 5\nI0817 11:14:43.136920 237 log.go:181] (0x40001ccd10) Data frame received for 3\nI0817 11:14:43.137292 237 log.go:181] (0x400069a0a0) (3) Data frame handling\nI0817 11:14:43.137822 237 log.go:181] (0x400069a0a0) (3) Data frame sent\nI0817 11:14:43.138118 237 log.go:181] (0x40001ccd10) Data frame received for 5\nI0817 11:14:43.138261 237 log.go:181] (0x4000027860) (5) Data frame handling\nI0817 11:14:43.138373 237 log.go:181] (0x4000027860) (5) Data frame sent\nI0817 11:14:43.138452 237 log.go:181] (0x40001ccd10) Data frame received for 3\nI0817 11:14:43.138531 237 log.go:181] (0x400069a0a0) (3) Data frame handling\nI0817 11:14:43.138711 237 log.go:181] (0x40001ccd10) Data frame received for 5\nI0817 11:14:43.138808 237 log.go:181] (0x4000027860) (5) Data frame handling\nI0817 11:14:43.139745 237 log.go:181] (0x40001ccd10) Data frame received for 1\nI0817 11:14:43.139836 237 log.go:181] (0x40000277c0) (1) Data frame handling\nI0817 11:14:43.139910 237 log.go:181] (0x40000277c0) (1) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0817 11:14:43.141901 237 log.go:181] (0x40001ccd10) (0x40000277c0) Stream removed, broadcasting: 1\nI0817 11:14:43.143342 237 log.go:181] (0x40001ccd10) Go away received\nI0817 11:14:43.147373 237 log.go:181] (0x40001ccd10) (0x40000277c0) Stream removed, broadcasting: 1\nI0817 11:14:43.147700 237 log.go:181] (0x40001ccd10) (0x400069a0a0) Stream removed, broadcasting: 3\nI0817 11:14:43.147893 237 log.go:181] (0x40001ccd10) (0x4000027860) Stream removed, broadcasting: 5\n" Aug 17 11:14:43.159: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 17 11:14:43.159: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 17 11:14:53.193: INFO: Waiting for StatefulSet statefulset-2077/ss2 to complete update Aug 17 11:14:53.194: INFO: Waiting for Pod statefulset-2077/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Aug 17 11:14:53.194: INFO: Waiting for Pod statefulset-2077/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Aug 17 11:15:03.207: INFO: Waiting for StatefulSet statefulset-2077/ss2 to complete update Aug 17 11:15:03.208: INFO: Waiting for Pod statefulset-2077/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Aug 17 11:15:03.208: INFO: Waiting for Pod statefulset-2077/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Aug 17 11:15:13.207: INFO: Waiting for StatefulSet statefulset-2077/ss2 to complete update Aug 17 11:15:13.207: INFO: Waiting for Pod statefulset-2077/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Aug 17 11:15:23.279: INFO: Waiting for StatefulSet statefulset-2077/ss2 to complete update Aug 17 11:15:23.279: INFO: Waiting for Pod 
statefulset-2077/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Aug 17 11:15:33.205: INFO: Waiting for StatefulSet statefulset-2077/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Aug 17 11:15:43.299: INFO: Deleting all statefulset in ns statefulset-2077 Aug 17 11:15:43.307: INFO: Scaling statefulset ss2 to 0 Aug 17 11:16:03.441: INFO: Waiting for statefulset status.replicas updated to 0 Aug 17 11:16:03.446: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 11:16:03.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2077" for this suite. • [SLOW TEST:213.478 seconds] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should perform rolling updates and roll backs of template modifications [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":303,"completed":28,"skipped":526,"failed":0} SSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 11:16:03.497: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Aug 17 11:16:03.560: INFO: >>> kubeConfig: /root/.kube/config Aug 17 11:16:24.491: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 11:17:50.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8639" for this suite. 
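In the StatefulSet exercise above, the rolling update and the rollback are driven by the same mechanism: editing `spec.template` mints a new controller revision, and the repeated `Waiting for Pod ... to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94` lines are the suite polling `status.currentRevision` against `status.updateRevision` until they converge. A client-go sketch of the update step (kubeconfig path and panic-on-error style are assumptions for brevity; rollback is the same call with the previous template restored):

```go
// Sketch only: trigger the rolling update the log shows by changing the
// pod template image.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.TODO()

	ss, err := cs.AppsV1().StatefulSets("statefulset-2077").Get(ctx, "ss2", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Any template change creates a new controller revision; the controller
	// then replaces pods in reverse ordinal order.
	ss.Spec.Template.Spec.Containers[0].Image = "docker.io/library/httpd:2.4.39-alpine"
	if _, err := cs.AppsV1().StatefulSets(ss.Namespace).Update(ctx, ss, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("rolling update of", ss.Name, "started")
}
```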
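The crd-publish-openapi test that follows registers throwaway CRDs in two different API groups and asserts both surface in the cluster's OpenAPI documentation. Any CRD with a structural schema is published that way; a minimal sketch of one such definition (group, kind, and schema here are invented for illustration):

```go
// Sketch only: a CRD whose structural schema gets published into the
// cluster's OpenAPI document.
package main

import (
	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func crd() *apiextv1.CustomResourceDefinition {
	return &apiextv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-foos.groupa.example.com"},
		Spec: apiextv1.CustomResourceDefinitionSpec{
			Group: "groupa.example.com",
			Scope: apiextv1.NamespaceScoped,
			Names: apiextv1.CustomResourceDefinitionNames{
				Plural:   "e2e-test-foos",
				Singular: "e2e-test-foo",
				Kind:     "E2eTestFoo",
				ListKind: "E2eTestFooList",
			},
			Versions: []apiextv1.CustomResourceDefinitionVersion{{
				Name: "v1", Served: true, Storage: true,
				// The structural schema is what makes the type publishable.
				Schema: &apiextv1.CustomResourceValidation{
					OpenAPIV3Schema: &apiextv1.JSONSchemaProps{
						Type: "object",
						Properties: map[string]apiextv1.JSONSchemaProps{
							"spec": {Type: "object"},
						},
					},
				},
			}},
		},
	}
}

func main() { _ = crd() }
```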
• [SLOW TEST:106.674 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":303,"completed":29,"skipped":531,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 11:17:50.173: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-c8981784-927b-4a9d-9145-b967f287a8d4 STEP: Creating a pod to test consume secrets Aug 17 11:17:50.325: INFO: Waiting up to 5m0s for pod "pod-secrets-e1f44bd7-4202-4d60-8e62-ab984e87b441" in namespace "secrets-6062" to be "Succeeded or Failed" Aug 17 11:17:50.333: INFO: Pod "pod-secrets-e1f44bd7-4202-4d60-8e62-ab984e87b441": Phase="Pending", Reason="", readiness=false. Elapsed: 7.436476ms Aug 17 11:17:52.341: INFO: Pod "pod-secrets-e1f44bd7-4202-4d60-8e62-ab984e87b441": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015748716s Aug 17 11:17:54.347: INFO: Pod "pod-secrets-e1f44bd7-4202-4d60-8e62-ab984e87b441": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021685733s STEP: Saw pod success Aug 17 11:17:54.347: INFO: Pod "pod-secrets-e1f44bd7-4202-4d60-8e62-ab984e87b441" satisfied condition "Succeeded or Failed" Aug 17 11:17:54.350: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-e1f44bd7-4202-4d60-8e62-ab984e87b441 container secret-volume-test: STEP: delete the pod Aug 17 11:17:54.394: INFO: Waiting for pod pod-secrets-e1f44bd7-4202-4d60-8e62-ab984e87b441 to disappear Aug 17 11:17:54.398: INFO: Pod pod-secrets-e1f44bd7-4202-4d60-8e62-ab984e87b441 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 11:17:54.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6062" for this suite. 
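The pod in the secrets test above surfaces a single Secret at two mount points and exits 0 once both copies read back correctly, which is why the suite waits on the "Succeeded or Failed" condition rather than readiness. A sketch of such a pod spec (image and command are assumptions; the mount paths mirror the test's naming):

```go
// Sketch only: one Secret consumed through two volumes in the same pod.
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func twoVolumePod(secretName string) *corev1.Pod {
	secretVol := func(name string) corev1.Volume {
		return corev1.Volume{
			Name: name,
			VolumeSource: corev1.VolumeSource{
				Secret: &corev1.SecretVolumeSource{SecretName: secretName},
			},
		}
	}
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{secretVol("secret-volume-1"), secretVol("secret-volume-2")},
			Containers: []corev1.Container{{
				Name:  "secret-volume-test",
				Image: "busybox", // assumption
				// Read the same key back through both mounts, then exit 0.
				Command: []string{"sh", "-c", "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "secret-volume-1", MountPath: "/etc/secret-volume-1", ReadOnly: true},
					{Name: "secret-volume-2", MountPath: "/etc/secret-volume-2", ReadOnly: true},
				},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
}

func main() { _ = twoVolumePod("secret-test-c8981784-927b-4a9d-9145-b967f287a8d4") }
```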
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":303,"completed":30,"skipped":548,"failed":0} SSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 11:17:54.412: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name s-test-opt-del-330b40f4-31df-409d-a588-af76c1134c6d STEP: Creating secret with name s-test-opt-upd-8d407019-04f8-4b24-be95-38ce7c8a9904 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-330b40f4-31df-409d-a588-af76c1134c6d STEP: Updating secret s-test-opt-upd-8d407019-04f8-4b24-be95-38ce7c8a9904 STEP: Creating secret with name s-test-opt-create-1ff9419a-27c9-46d1-90ca-9455447a4bc2 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 11:19:27.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3279" for this suite. 
• [SLOW TEST:92.759 seconds] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":31,"skipped":555,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 11:19:27.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 17 11:19:29.783: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 17 11:19:31.819: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733259969, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733259969, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733259969, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733259969, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 17 11:19:33.937: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733259969, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733259969, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733259969, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733259969, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 17 11:19:36.861: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 11:19:37.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-399" for this suite. STEP: Destroying namespace "webhook-399-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.936 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":303,"completed":32,"skipped":559,"failed":0} [sig-network] Services should serve a basic endpoint from pods [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 11:19:37.108: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should serve a basic endpoint from pods [Conformance] 
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service endpoint-test2 in namespace services-2276 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2276 to expose endpoints map[] Aug 17 11:19:37.275: INFO: successfully validated that service endpoint-test2 in namespace services-2276 exposes endpoints map[] STEP: Creating pod pod1 in namespace services-2276 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2276 to expose endpoints map[pod1:[80]] Aug 17 11:19:40.423: INFO: successfully validated that service endpoint-test2 in namespace services-2276 exposes endpoints map[pod1:[80]] STEP: Creating pod pod2 in namespace services-2276 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2276 to expose endpoints map[pod1:[80] pod2:[80]] Aug 17 11:19:44.561: INFO: successfully validated that service endpoint-test2 in namespace services-2276 exposes endpoints map[pod1:[80] pod2:[80]] STEP: Deleting pod pod1 in namespace services-2276 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2276 to expose endpoints map[pod2:[80]] Aug 17 11:19:44.615: INFO: successfully validated that service endpoint-test2 in namespace services-2276 exposes endpoints map[pod2:[80]] STEP: Deleting pod pod2 in namespace services-2276 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2276 to expose endpoints map[] Aug 17 11:19:44.679: INFO: successfully validated that service endpoint-test2 in namespace services-2276 exposes endpoints map[] [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 11:19:45.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2276" for this suite. 
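------------------------------
The Services run above demonstrates the core endpoints contract: the endpoints controller adds a pod's IP:port to the service's Endpoints object once the pod is ready and removes it on deletion, which is exactly the map[] -> map[pod1:[80]] -> map[pod1:[80] pod2:[80]] -> map[] progression logged. Below is a minimal client-go sketch of the same check, written outside the e2e framework against client-go v0.19.x (matching the cluster version in this log); the namespace, poll interval, and label selector are illustrative assumptions, not values from the test source.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; this e2e run used /root/.kube/config.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ns := "services-demo" // illustrative namespace

	// A service selecting pods labeled name=endpoint-test on port 80,
	// mirroring the endpoint-test2 service the test creates.
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "endpoint-test2"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"name": "endpoint-test"},
			Ports:    []corev1.ServicePort{{Port: 80}},
		},
	}
	if _, err := cs.CoreV1().Services(ns).Create(context.TODO(), svc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// Poll the Endpoints object of the same name until the controller has
	// populated at least one ready address, analogous to the test's 3m0s wait.
	err = wait.PollImmediate(2*time.Second, 3*time.Minute, func() (bool, error) {
		ep, err := cs.CoreV1().Endpoints(ns).Get(context.TODO(), "endpoint-test2", metav1.GetOptions{})
		if err != nil {
			return false, nil // Endpoints object not synced yet; keep polling.
		}
		for _, ss := range ep.Subsets {
			if len(ss.Addresses) > 0 {
				fmt.Printf("endpoints ready: %v\n", ss.Addresses)
				return true, nil
			}
		}
		return false, nil
	})
	if err != nil {
		panic(err)
	}
}

Returning (false, nil) from the poll function tolerates the controller's sync delay instead of failing on a transient NotFound, which is the same tolerance the test's "waiting up to 3m0s ... to expose endpoints" steps encode.
------------------------------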
[AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:8.285 seconds] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":303,"completed":33,"skipped":559,"failed":0} SSS ------------------------------ [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Events /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 11:19:45.394: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a test event STEP: listing all events in all namespaces STEP: patching the test event STEP: fetching the test event STEP: deleting the test event STEP: listing all events in all namespaces [AfterEach] [sig-api-machinery] Events /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 11:19:47.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-3421" for this suite. 
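------------------------------
The Events steps above are a straight CRUD pass over the core/v1 events API: create, list across namespaces, patch, fetch, delete, list again. A hedged sketch of the same round trip with client-go follows; client construction is as in the previous sketch, and the namespace, event name, and field values are illustrative (the real test generates its own).

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ns := "events-demo" // illustrative namespace

	// Create a test event bound to a minimal object reference.
	ev := &corev1.Event{
		ObjectMeta:     metav1.ObjectMeta{Name: "test-event"},
		InvolvedObject: corev1.ObjectReference{Namespace: ns},
		Reason:         "Testing",
		Message:        "created for demonstration",
		Type:           corev1.EventTypeNormal,
	}
	if _, err := cs.CoreV1().Events(ns).Create(context.TODO(), ev, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// Patch the message with a strategic merge patch, then fetch it back.
	patch := []byte(`{"message":"patched for demonstration"}`)
	if _, err := cs.CoreV1().Events(ns).Patch(context.TODO(), "test-event",
		types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
		panic(err)
	}
	got, err := cs.CoreV1().Events(ns).Get(context.TODO(), "test-event", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("event message:", got.Message)

	// Delete, then list across all namespaces to confirm removal.
	if err := cs.CoreV1().Events(ns).Delete(context.TODO(), "test-event", metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
	list, err := cs.CoreV1().Events(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("events visible across namespaces:", len(list.Items))
}

Passing metav1.NamespaceAll (the empty string) to the Events getter is what makes the final List span every namespace, matching the test's "listing all events in all namespaces" step.
------------------------------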
•{"msg":"PASSED [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":303,"completed":34,"skipped":562,"failed":0} ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 11:19:47.873: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0817 11:19:59.339482 10 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Aug 17 11:21:01.506: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 11:21:01.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5094" for this suite. 
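------------------------------
The garbage-collector run above hinges on ownerReferences: pods created by a replication controller carry an ownerReference pointing at it, so deleting the RC without orphaning lets the garbage collector cascade the deletion to the pods, which is what the "wait for all pods to be garbage collected" step verifies. The knob is DeleteOptions.PropagationPolicy; the sketch below shows a cascading delete under the same assumptions as the earlier sketches (namespace and RC name are illustrative).

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Background propagation: the RC object is deleted immediately and the
	// garbage collector then removes its dependent pods. By contrast,
	// metav1.DeletePropagationOrphan strips the ownerReferences and leaves
	// the pods running.
	policy := metav1.DeletePropagationBackground
	rcClient := cs.CoreV1().ReplicationControllers("gc-demo") // illustrative namespace
	if err := rcClient.Delete(context.TODO(), "demo-rc", metav1.DeleteOptions{PropagationPolicy: &policy}); err != nil {
		panic(err)
	}
}

A third option, metav1.DeletePropagationForeground, inverts the ordering: the RC stays visible, blocked by a finalizer, until all of its pods are gone. The MetricsGrabber warning in the log is incidental to the test's assertion and only means post-test metrics gathering was skipped.
------------------------------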
• [SLOW TEST:73.650 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":303,"completed":35,"skipped":562,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 11:21:01.525: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] deployment should support proportional scaling [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 17 11:21:01.620: INFO: Creating deployment "webserver-deployment" Aug 17 11:21:01.628: INFO: Waiting for observed generation 1 Aug 17 11:21:04.460: INFO: Waiting for all required pods to come up Aug 17 11:21:05.084: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running Aug 17 11:21:17.480: INFO: Waiting for deployment "webserver-deployment" to complete Aug 17 11:21:17.491: INFO: Updating deployment "webserver-deployment" with a non-existent image Aug 17 11:21:17.506: INFO: Updating deployment webserver-deployment Aug 17 11:21:17.507: INFO: Waiting for observed generation 2 Aug 17 11:21:19.791: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Aug 17 11:21:19.798: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Aug 17 11:21:19.803: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Aug 17 11:21:19.821: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Aug 17 11:21:19.821: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Aug 17 11:21:19.825: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Aug 17 11:21:19.878: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Aug 17 11:21:19.878: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Aug 17 11:21:20.071: INFO: Updating deployment webserver-deployment Aug 17 11:21:20.072: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Aug 17 11:21:20.538: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Aug 
17 11:21:23.371: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 Aug 17 11:21:23.736: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-751 /apis/apps/v1/namespaces/deployment-751/deployments/webserver-deployment 06517ba1-54db-4b38-9dcd-813a09401f36 705081 3 2020-08-17 11:21:01 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-08-17 11:21:19 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-08-17 11:21:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0x4003684938 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-08-17 11:21:20 +0000 UTC,LastTransitionTime:2020-08-17 11:21:20 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet 
"webserver-deployment-795d758f88" is progressing.,LastUpdateTime:2020-08-17 11:21:21 +0000 UTC,LastTransitionTime:2020-08-17 11:21:01 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Aug 17 11:21:24.084: INFO: New ReplicaSet "webserver-deployment-795d758f88" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-795d758f88 deployment-751 /apis/apps/v1/namespaces/deployment-751/replicasets/webserver-deployment-795d758f88 69d300f3-281d-4f42-865a-496950f82dec 705067 3 2020-08-17 11:21:17 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 06517ba1-54db-4b38-9dcd-813a09401f36 0x4003534e67 0x4003534e68}] [] [{kube-controller-manager Update apps/v1 2020-08-17 11:21:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"06517ba1-54db-4b38-9dcd-813a09401f36\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 795d758f88,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0x4003534ee8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Aug 17 11:21:24.084: INFO: All old ReplicaSets of Deployment "webserver-deployment": Aug 17 11:21:24.085: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-dd94f59b7 deployment-751 /apis/apps/v1/namespaces/deployment-751/replicasets/webserver-deployment-dd94f59b7 0ad787f3-ac10-44fb-99da-bc05a972ba24 705075 3 2020-08-17 11:21:01 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 
deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 06517ba1-54db-4b38-9dcd-813a09401f36 0x4003534f47 0x4003534f48}] [] [{kube-controller-manager Update apps/v1 2020-08-17 11:21:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"06517ba1-54db-4b38-9dcd-813a09401f36\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: dd94f59b7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0x4003534fc8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Aug 17 11:21:24.488: INFO: Pod "webserver-deployment-795d758f88-4xw6z" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-4xw6z webserver-deployment-795d758f88- deployment-751 /api/v1/namespaces/deployment-751/pods/webserver-deployment-795d758f88-4xw6z 47f47c9d-8b04-43e8-afce-7c40a62f8bd1 704982 0 2020-08-17 11:21:17 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 69d300f3-281d-4f42-865a-496950f82dec 0x40035354c7 0x40035354c8}] [] [{kube-controller-manager Update v1 2020-08-17 11:21:17 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"69d300f3-281d-4f42-865a-496950f82dec\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-17 11:21:18 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-54xzq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-54xzq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-54xzq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effe
ct:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:18 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:18 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:,StartTime:2020-08-17 11:21:18 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 17 11:21:24.489: INFO: Pod "webserver-deployment-795d758f88-bhxbg" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-bhxbg webserver-deployment-795d758f88- deployment-751 /api/v1/namespaces/deployment-751/pods/webserver-deployment-795d758f88-bhxbg 379f793f-1f1e-4e1c-98e8-51224bfa8434 705059 0 2020-08-17 11:21:20 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 69d300f3-281d-4f42-865a-496950f82dec 0x4003535800 0x4003535801}] [] [{kube-controller-manager Update v1 2020-08-17 11:21:20 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"69d300f3-281d-4f42-865a-496950f82dec\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-17 11:21:21 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-54xzq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-54xzq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-54xzq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-08-17 11:21:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2020-08-17 11:21:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 17 11:21:24.490: INFO: Pod "webserver-deployment-795d758f88-bx945" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-bx945 webserver-deployment-795d758f88- deployment-751 /api/v1/namespaces/deployment-751/pods/webserver-deployment-795d758f88-bx945 d7b085a0-2c0b-4828-982f-004030fec89d 705107 0 2020-08-17 11:21:21 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 69d300f3-281d-4f42-865a-496950f82dec 0x4003535b40 0x4003535b41}] [] [{kube-controller-manager Update v1 2020-08-17 11:21:21 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"69d300f3-281d-4f42-865a-496950f82dec\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-17 11:21:22 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-54xzq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-54xzq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-54xzq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-08-17 11:21:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:,StartTime:2020-08-17 11:21:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 17 11:21:24.491: INFO: Pod "webserver-deployment-795d758f88-ff8jh" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-ff8jh webserver-deployment-795d758f88- deployment-751 /api/v1/namespaces/deployment-751/pods/webserver-deployment-795d758f88-ff8jh 586e2c9f-be2d-4041-b045-400e6e7c0144 705098 0 2020-08-17 11:21:21 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 69d300f3-281d-4f42-865a-496950f82dec 0x4003535f20 0x4003535f21}] [] [{kube-controller-manager Update v1 2020-08-17 11:21:21 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"69d300f3-281d-4f42-865a-496950f82dec\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-17 11:21:22 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-54xzq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-54xzq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-54xzq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-08-17 11:21:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2020-08-17 11:21:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 17 11:21:24.493: INFO: Pod "webserver-deployment-795d758f88-gj5f2" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-gj5f2 webserver-deployment-795d758f88- deployment-751 /api/v1/namespaces/deployment-751/pods/webserver-deployment-795d758f88-gj5f2 10b799a5-cd3f-4751-a672-c171f7c4fb0b 705126 0 2020-08-17 11:21:17 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 69d300f3-281d-4f42-865a-496950f82dec 0x4000f08160 0x4000f08161}] [] [{kube-controller-manager Update v1 2020-08-17 11:21:17 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"69d300f3-281d-4f42-865a-496950f82dec\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-17 11:21:24 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.211\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-54xzq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-54xzq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-54xzq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:17 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:10.244.2.211,StartTime:2020-08-17 11:21:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.211,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 17 11:21:24.494: INFO: Pod "webserver-deployment-795d758f88-h4gn6" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-h4gn6 webserver-deployment-795d758f88- deployment-751 /api/v1/namespaces/deployment-751/pods/webserver-deployment-795d758f88-h4gn6 2da278cc-5cad-401e-8600-459c8bc437bc 705111 0 2020-08-17 11:21:21 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 69d300f3-281d-4f42-865a-496950f82dec 0x4000f08440 0x4000f08441}] [] [{kube-controller-manager Update v1 2020-08-17 11:21:21 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"69d300f3-281d-4f42-865a-496950f82dec\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-17 11:21:22 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-54xzq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-54xzq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-54xzq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-08-17 11:21:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:,StartTime:2020-08-17 11:21:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 17 11:21:24.495: INFO: Pod "webserver-deployment-795d758f88-htcrc" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-htcrc webserver-deployment-795d758f88- deployment-751 /api/v1/namespaces/deployment-751/pods/webserver-deployment-795d758f88-htcrc 8d96fbf9-a8f7-4248-a027-bac9d79e5d2d 704977 0 2020-08-17 11:21:17 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 69d300f3-281d-4f42-865a-496950f82dec 0x4000f08630 0x4000f08631}] [] [{kube-controller-manager Update v1 2020-08-17 11:21:17 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"69d300f3-281d-4f42-865a-496950f82dec\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-17 11:21:17 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-54xzq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-54xzq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-54xzq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-08-17 11:21:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2020-08-17 11:21:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 17 11:21:24.496: INFO: Pod "webserver-deployment-795d758f88-krwgq" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-krwgq webserver-deployment-795d758f88- deployment-751 /api/v1/namespaces/deployment-751/pods/webserver-deployment-795d758f88-krwgq 23f4d892-b140-4db3-ab35-96674d1917cb 705124 0 2020-08-17 11:21:21 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 69d300f3-281d-4f42-865a-496950f82dec 0x4000f08800 0x4000f08801}] [] [{kube-controller-manager Update v1 2020-08-17 11:21:21 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"69d300f3-281d-4f42-865a-496950f82dec\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-17 11:21:23 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-54xzq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-54xzq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-54xzq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-08-17 11:21:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2020-08-17 11:21:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 17 11:21:24.497: INFO: Pod "webserver-deployment-795d758f88-kxc8s" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-kxc8s webserver-deployment-795d758f88- deployment-751 /api/v1/namespaces/deployment-751/pods/webserver-deployment-795d758f88-kxc8s c10254e3-303f-4190-83f6-d9d4570688e5 705077 0 2020-08-17 11:21:20 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 69d300f3-281d-4f42-865a-496950f82dec 0x4000f089a0 0x4000f089a1}] [] [{kube-controller-manager Update v1 2020-08-17 11:21:20 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"69d300f3-281d-4f42-865a-496950f82dec\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-17 11:21:21 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-54xzq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-54xzq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-54xzq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-08-17 11:21:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2020-08-17 11:21:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 17 11:21:24.498: INFO: Pod "webserver-deployment-795d758f88-kzlqg" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-kzlqg webserver-deployment-795d758f88- deployment-751 /api/v1/namespaces/deployment-751/pods/webserver-deployment-795d758f88-kzlqg 52cb821c-6a35-42e9-b70b-fe362f56cfd8 704970 0 2020-08-17 11:21:17 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 69d300f3-281d-4f42-865a-496950f82dec 0x4000f08b50 0x4000f08b51}] [] [{kube-controller-manager Update v1 2020-08-17 11:21:17 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"69d300f3-281d-4f42-865a-496950f82dec\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-17 11:21:17 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-54xzq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-54xzq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-54xzq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-08-17 11:21:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:,StartTime:2020-08-17 11:21:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 17 11:21:24.500: INFO: Pod "webserver-deployment-795d758f88-qsdt6" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-qsdt6 webserver-deployment-795d758f88- deployment-751 /api/v1/namespaces/deployment-751/pods/webserver-deployment-795d758f88-qsdt6 d9e51950-150a-4008-838c-0d513882d560 705092 0 2020-08-17 11:21:20 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 69d300f3-281d-4f42-865a-496950f82dec 0x4000f08cf0 0x4000f08cf1}] [] [{kube-controller-manager Update v1 2020-08-17 11:21:20 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"69d300f3-281d-4f42-865a-496950f82dec\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-17 11:21:22 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-54xzq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-54xzq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-54xzq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-08-17 11:21:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:,StartTime:2020-08-17 11:21:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 17 11:21:24.501: INFO: Pod "webserver-deployment-795d758f88-ttxjb" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-ttxjb webserver-deployment-795d758f88- deployment-751 /api/v1/namespaces/deployment-751/pods/webserver-deployment-795d758f88-ttxjb 625edf3c-2940-4217-888f-2eb1b79e4b4a 705105 0 2020-08-17 11:21:21 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 69d300f3-281d-4f42-865a-496950f82dec 0x4000f08eb0 0x4000f08eb1}] [] [{kube-controller-manager Update v1 2020-08-17 11:21:21 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"69d300f3-281d-4f42-865a-496950f82dec\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-17 11:21:22 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-54xzq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-54xzq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-54xzq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-08-17 11:21:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2020-08-17 11:21:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 17 11:21:24.502: INFO: Pod "webserver-deployment-795d758f88-vnzhx" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-vnzhx webserver-deployment-795d758f88- deployment-751 /api/v1/namespaces/deployment-751/pods/webserver-deployment-795d758f88-vnzhx 832d77e5-ca45-4b28-8e12-8d71823c723d 705109 0 2020-08-17 11:21:17 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 69d300f3-281d-4f42-865a-496950f82dec 0x4000f09060 0x4000f09061}] [] [{kube-controller-manager Update v1 2020-08-17 11:21:17 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"69d300f3-281d-4f42-865a-496950f82dec\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-17 11:21:22 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.191\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-54xzq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-54xzq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-54xzq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:17 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.1.191,StartTime:2020-08-17 11:21:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.191,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 17 11:21:24.503: INFO: Pod "webserver-deployment-dd94f59b7-2dldm" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-2dldm webserver-deployment-dd94f59b7- deployment-751 /api/v1/namespaces/deployment-751/pods/webserver-deployment-dd94f59b7-2dldm d28b4cea-787a-44a1-94ad-2c053d50e723 704903 0 2020-08-17 11:21:01 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 0ad787f3-ac10-44fb-99da-bc05a972ba24 0x4000f09240 0x4000f09241}] [] [{kube-controller-manager Update v1 2020-08-17 11:21:01 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0ad787f3-ac10-44fb-99da-bc05a972ba24\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-17 11:21:15 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.189\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-54xzq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-54xzq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-54xzq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:01 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.1.189,StartTime:2020-08-17 11:21:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-17 11:21:14 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://561eb0415d7ff3b770d1c10d22fb6a9f188d9ce4617a6780c0ede0c547ede378,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.189,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 17 11:21:24.504: INFO: Pod "webserver-deployment-dd94f59b7-2k5cc" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-2k5cc webserver-deployment-dd94f59b7- deployment-751 /api/v1/namespaces/deployment-751/pods/webserver-deployment-dd94f59b7-2k5cc 576320d5-6e54-40ca-b87d-c8db10436cf4 705088 0 2020-08-17 11:21:20 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 0ad787f3-ac10-44fb-99da-bc05a972ba24 0x4000f093e7 0x4000f093e8}] [] [{kube-controller-manager Update v1 2020-08-17 11:21:20 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0ad787f3-ac10-44fb-99da-bc05a972ba24\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-17 11:21:22 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-54xzq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-54xzq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-54xzq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:21 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2020-08-17 11:21:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 17 11:21:24.505: INFO: Pod "webserver-deployment-dd94f59b7-2sz6c" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-2sz6c webserver-deployment-dd94f59b7- deployment-751 /api/v1/namespaces/deployment-751/pods/webserver-deployment-dd94f59b7-2sz6c 1b274595-c623-43b2-ab62-8e852d8332c6 705053 0 2020-08-17 11:21:20 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 0ad787f3-ac10-44fb-99da-bc05a972ba24 0x4000f095c7 0x4000f095c8}] [] [{kube-controller-manager Update v1 2020-08-17 11:21:20 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0ad787f3-ac10-44fb-99da-bc05a972ba24\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-17 11:21:21 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-54xzq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-54xzq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-54xzq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:21 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:,StartTime:2020-08-17 11:21:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 17 11:21:24.506: INFO: Pod "webserver-deployment-dd94f59b7-4gvhq" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-4gvhq webserver-deployment-dd94f59b7- deployment-751 /api/v1/namespaces/deployment-751/pods/webserver-deployment-dd94f59b7-4gvhq 8985081b-0f5a-4951-9437-b1c077bee4da 705119 0 2020-08-17 11:21:21 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 0ad787f3-ac10-44fb-99da-bc05a972ba24 0x4000f09777 0x4000f09778}] [] [{kube-controller-manager Update v1 2020-08-17 11:21:21 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0ad787f3-ac10-44fb-99da-bc05a972ba24\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-17 11:21:23 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-54xzq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-54xzq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-54xzq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:22 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:,StartTime:2020-08-17 11:21:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 17 11:21:24.507: INFO: Pod "webserver-deployment-dd94f59b7-8km6k" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-8km6k webserver-deployment-dd94f59b7- deployment-751 /api/v1/namespaces/deployment-751/pods/webserver-deployment-dd94f59b7-8km6k 8bdbb0d6-5f46-4d94-a775-0e8be11e6bfa 704874 0 2020-08-17 11:21:01 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 0ad787f3-ac10-44fb-99da-bc05a972ba24 0x4000f09917 0x4000f09918}] [] [{kube-controller-manager Update v1 2020-08-17 11:21:01 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0ad787f3-ac10-44fb-99da-bc05a972ba24\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-17 11:21:13 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.187\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-54xzq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-54xzq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-54xzq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:01 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.1.187,StartTime:2020-08-17 11:21:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-17 11:21:12 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://0a88c918dd7a45b1e13a7945048818c2fc03248a953c7e01c6be3be04964538e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.187,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 17 11:21:24.508: INFO: Pod "webserver-deployment-dd94f59b7-9bdgd" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-9bdgd webserver-deployment-dd94f59b7- deployment-751 /api/v1/namespaces/deployment-751/pods/webserver-deployment-dd94f59b7-9bdgd 69fd3196-ec8e-457f-82b2-2578db031cc1 705094 0 2020-08-17 11:21:20 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 0ad787f3-ac10-44fb-99da-bc05a972ba24 0x4000f09ad7 0x4000f09ad8}] [] [{kube-controller-manager Update v1 2020-08-17 11:21:20 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0ad787f3-ac10-44fb-99da-bc05a972ba24\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-17 11:21:22 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-54xzq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-54xzq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-54xzq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:21 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2020-08-17 11:21:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 17 11:21:24.509: INFO: Pod "webserver-deployment-dd94f59b7-dvmtt" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-dvmtt webserver-deployment-dd94f59b7- deployment-751 /api/v1/namespaces/deployment-751/pods/webserver-deployment-dd94f59b7-dvmtt 640c7932-1755-4bf7-abec-9f321e32dc0f 705076 0 2020-08-17 11:21:20 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 0ad787f3-ac10-44fb-99da-bc05a972ba24 0x4000f09c67 0x4000f09c68}] [] [{kube-controller-manager Update v1 2020-08-17 11:21:20 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0ad787f3-ac10-44fb-99da-bc05a972ba24\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-17 11:21:21 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-54xzq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-54xzq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-54xzq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:21 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:,StartTime:2020-08-17 11:21:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 17 11:21:24.510: INFO: Pod "webserver-deployment-dd94f59b7-dxfhl" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-dxfhl webserver-deployment-dd94f59b7- deployment-751 /api/v1/namespaces/deployment-751/pods/webserver-deployment-dd94f59b7-dxfhl beed5abc-23dd-4a5c-8db9-3339073f1d9a 705121 0 2020-08-17 11:21:21 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 0ad787f3-ac10-44fb-99da-bc05a972ba24 0x4000f09df7 0x4000f09df8}] [] [{kube-controller-manager Update v1 2020-08-17 11:21:21 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0ad787f3-ac10-44fb-99da-bc05a972ba24\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-17 11:21:23 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-54xzq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-54xzq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-54xzq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:22 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2020-08-17 11:21:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 17 11:21:24.511: INFO: Pod "webserver-deployment-dd94f59b7-fl9vb" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-fl9vb webserver-deployment-dd94f59b7- deployment-751 /api/v1/namespaces/deployment-751/pods/webserver-deployment-dd94f59b7-fl9vb 1d561fe4-2dc5-4658-a568-b719bd1a9403 705099 0 2020-08-17 11:21:20 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 0ad787f3-ac10-44fb-99da-bc05a972ba24 0x4000f09f97 0x4000f09f98}] [] [{kube-controller-manager Update v1 2020-08-17 11:21:20 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0ad787f3-ac10-44fb-99da-bc05a972ba24\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-17 11:21:22 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-54xzq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-54xzq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-54xzq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:21 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:,StartTime:2020-08-17 11:21:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 17 11:21:24.513: INFO: Pod "webserver-deployment-dd94f59b7-hstp5" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-hstp5 webserver-deployment-dd94f59b7- deployment-751 /api/v1/namespaces/deployment-751/pods/webserver-deployment-dd94f59b7-hstp5 a0d92a40-b0f4-404c-b377-98becc5a8ffb 705085 0 2020-08-17 11:21:20 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 0ad787f3-ac10-44fb-99da-bc05a972ba24 0x40000578d7 0x40000578d8}] [] [{kube-controller-manager Update v1 2020-08-17 11:21:20 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0ad787f3-ac10-44fb-99da-bc05a972ba24\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-17 11:21:22 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-54xzq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-54xzq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-54xzq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:21 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:,StartTime:2020-08-17 11:21:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 17 11:21:24.515: INFO: Pod "webserver-deployment-dd94f59b7-j5qmj" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-j5qmj webserver-deployment-dd94f59b7- deployment-751 /api/v1/namespaces/deployment-751/pods/webserver-deployment-dd94f59b7-j5qmj 05053d76-d69e-4dbf-9dbe-814d37d421f5 704912 0 2020-08-17 11:21:01 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 0ad787f3-ac10-44fb-99da-bc05a972ba24 0x400098c237 0x400098c238}] [] [{kube-controller-manager Update v1 2020-08-17 11:21:01 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0ad787f3-ac10-44fb-99da-bc05a972ba24\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-17 11:21:15 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.209\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-54xzq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-54xzq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-54xzq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:01 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:10.244.2.209,StartTime:2020-08-17 11:21:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-17 11:21:15 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://474f335f870e098a42b8ec4e3cd80d45779f2d8df539b611ffc8a1399f0fefcb,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.209,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 17 11:21:24.518: INFO: Pod "webserver-deployment-dd94f59b7-kh8wq" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-kh8wq webserver-deployment-dd94f59b7- deployment-751 /api/v1/namespaces/deployment-751/pods/webserver-deployment-dd94f59b7-kh8wq baa85007-640c-4b41-8ab4-ff7aa6ffbcda 704895 0 2020-08-17 11:21:01 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 0ad787f3-ac10-44fb-99da-bc05a972ba24 0x400098c5d7 0x400098c5d8}] [] [{kube-controller-manager Update v1 2020-08-17 11:21:01 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0ad787f3-ac10-44fb-99da-bc05a972ba24\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-17 11:21:14 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.206\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-54xzq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-54xzq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-54xzq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:01 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:10.244.2.206,StartTime:2020-08-17 11:21:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-17 11:21:13 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://cc7c238879091f3232bdc1f664f3b639dcc4832f17178010bd07e9b3162fd20a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.206,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 17 11:21:24.520: INFO: Pod "webserver-deployment-dd94f59b7-kpb8q" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-kpb8q webserver-deployment-dd94f59b7- deployment-751 /api/v1/namespaces/deployment-751/pods/webserver-deployment-dd94f59b7-kpb8q 03357fdc-9b64-414b-9b2e-dfd1d4336bc4 704909 0 2020-08-17 11:21:01 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 0ad787f3-ac10-44fb-99da-bc05a972ba24 0x400098c927 0x400098c928}] [] [{kube-controller-manager Update v1 2020-08-17 11:21:01 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0ad787f3-ac10-44fb-99da-bc05a972ba24\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-17 11:21:15 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.210\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-54xzq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-54xzq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-54xzq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:02 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:10.244.2.210,StartTime:2020-08-17 11:21:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-17 11:21:15 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://2b451f2f4150389ab5cde7647810ff337faecb77dfa5910cc21aafee6ef8069c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.210,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 17 11:21:24.521: INFO: Pod "webserver-deployment-dd94f59b7-lz5xw" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-lz5xw webserver-deployment-dd94f59b7- deployment-751 /api/v1/namespaces/deployment-751/pods/webserver-deployment-dd94f59b7-lz5xw c77c2977-4e9b-43c8-83b4-5ee3d14e1cf7 704897 0 2020-08-17 11:21:01 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 0ad787f3-ac10-44fb-99da-bc05a972ba24 0x400098cf37 0x400098cf38}] [] [{kube-controller-manager Update v1 2020-08-17 11:21:01 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0ad787f3-ac10-44fb-99da-bc05a972ba24\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-17 11:21:14 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.190\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-54xzq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-54xzq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-54xzq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:02 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.1.190,StartTime:2020-08-17 11:21:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-17 11:21:13 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://75bb10545e4a7f1c39e205ea2af3f68550eb5a125a82a19cd86da556b6842ec1,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.190,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 17 11:21:24.523: INFO: Pod "webserver-deployment-dd94f59b7-lzkjn" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-lzkjn webserver-deployment-dd94f59b7- deployment-751 /api/v1/namespaces/deployment-751/pods/webserver-deployment-dd94f59b7-lzkjn 71bdbb85-cded-4e00-8027-1ffe8c3c438c 705114 0 2020-08-17 11:21:21 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 0ad787f3-ac10-44fb-99da-bc05a972ba24 0x4000b7e457 0x4000b7e458}] [] [{kube-controller-manager Update v1 2020-08-17 11:21:21 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0ad787f3-ac10-44fb-99da-bc05a972ba24\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-17 11:21:23 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-54xzq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-54xzq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-54xzq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:22 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2020-08-17 11:21:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 17 11:21:24.524: INFO: Pod "webserver-deployment-dd94f59b7-nsdbj" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-nsdbj webserver-deployment-dd94f59b7- deployment-751 /api/v1/namespaces/deployment-751/pods/webserver-deployment-dd94f59b7-nsdbj 9d19f722-281c-4874-bbe7-adbed4daaa86 704889 0 2020-08-17 11:21:01 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 0ad787f3-ac10-44fb-99da-bc05a972ba24 0x4000b7e727 0x4000b7e728}] [] [{kube-controller-manager Update v1 2020-08-17 11:21:01 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0ad787f3-ac10-44fb-99da-bc05a972ba24\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-17 11:21:14 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.186\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-54xzq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-54xzq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-54xzq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:01 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.1.186,StartTime:2020-08-17 11:21:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-17 11:21:11 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://85ca37d0d155372c4d43de52e83903fcedd1a5cb0d050b7aa4b9c665cbacf780,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.186,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 17 11:21:24.525: INFO: Pod "webserver-deployment-dd94f59b7-q9wnz" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-q9wnz webserver-deployment-dd94f59b7- deployment-751 /api/v1/namespaces/deployment-751/pods/webserver-deployment-dd94f59b7-q9wnz b00891f5-cb9b-4f98-8b0a-052dd015282e 705103 0 2020-08-17 11:21:20 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 0ad787f3-ac10-44fb-99da-bc05a972ba24 0x4000b7ead7 0x4000b7ead8}] [] [{kube-controller-manager Update v1 2020-08-17 11:21:20 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0ad787f3-ac10-44fb-99da-bc05a972ba24\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-17 11:21:22 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-54xzq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-54xzq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-54xzq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:21 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:,StartTime:2020-08-17 11:21:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 17 11:21:24.526: INFO: Pod "webserver-deployment-dd94f59b7-trxts" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-trxts webserver-deployment-dd94f59b7- deployment-751 /api/v1/namespaces/deployment-751/pods/webserver-deployment-dd94f59b7-trxts 086c85a5-43c5-46b5-83b8-5570da91b8a3 704884 0 2020-08-17 11:21:01 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 0ad787f3-ac10-44fb-99da-bc05a972ba24 0x4000b7ed47 0x4000b7ed48}] [] [{kube-controller-manager Update v1 2020-08-17 11:21:01 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0ad787f3-ac10-44fb-99da-bc05a972ba24\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-17 11:21:13 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.188\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-54xzq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-54xzq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-54xzq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:01 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.1.188,StartTime:2020-08-17 11:21:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-17 11:21:12 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://1171863147efee32eeeae3f9f6d35b26a64612808db9a4c5df25a7c5a2ee4f78,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.188,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 17 11:21:24.527: INFO: Pod "webserver-deployment-dd94f59b7-vmjss" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-vmjss webserver-deployment-dd94f59b7- deployment-751 /api/v1/namespaces/deployment-751/pods/webserver-deployment-dd94f59b7-vmjss 709ca10e-46e4-4d04-bd1b-03555d0f91bb 705112 0 2020-08-17 11:21:21 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 0ad787f3-ac10-44fb-99da-bc05a972ba24 0x4000b7f327 0x4000b7f328}] [] [{kube-controller-manager Update v1 2020-08-17 11:21:21 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0ad787f3-ac10-44fb-99da-bc05a972ba24\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-17 11:21:22 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-54xzq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-54xzq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-54xzq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:22 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2020-08-17 11:21:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 17 11:21:24.529: INFO: Pod "webserver-deployment-dd94f59b7-x4m66" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-x4m66 webserver-deployment-dd94f59b7- deployment-751 /api/v1/namespaces/deployment-751/pods/webserver-deployment-dd94f59b7-x4m66 5d61856b-c69b-42c5-9c64-a073e7ae2c5d 705113 0 2020-08-17 11:21:21 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 0ad787f3-ac10-44fb-99da-bc05a972ba24 0x4000b7ff57 0x4000b7ff58}] [] [{kube-controller-manager Update v1 2020-08-17 11:21:21 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0ad787f3-ac10-44fb-99da-bc05a972ba24\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-17 11:21:23 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-54xzq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-54xzq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-54xzq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:22 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:21:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:,StartTime:2020-08-17 11:21:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 11:21:24.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-751" for this suite. • [SLOW TEST:23.151 seconds] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":303,"completed":36,"skipped":584,"failed":0} [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 11:21:24.677: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Aug 17 11:21:24.994: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9650 /api/v1/namespaces/watch-9650/configmaps/e2e-watch-test-configmap-a e0992c16-3bb9-46d9-ad61-9b9e8527c9f3 705138 0 2020-08-17 11:21:24 +0000 UTC 
map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-08-17 11:21:24 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Aug 17 11:21:24.995: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9650 /api/v1/namespaces/watch-9650/configmaps/e2e-watch-test-configmap-a e0992c16-3bb9-46d9-ad61-9b9e8527c9f3 705138 0 2020-08-17 11:21:24 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-08-17 11:21:24 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification Aug 17 11:21:35.310: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9650 /api/v1/namespaces/watch-9650/configmaps/e2e-watch-test-configmap-a e0992c16-3bb9-46d9-ad61-9b9e8527c9f3 705185 0 2020-08-17 11:21:24 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-08-17 11:21:35 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Aug 17 11:21:35.311: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9650 /api/v1/namespaces/watch-9650/configmaps/e2e-watch-test-configmap-a e0992c16-3bb9-46d9-ad61-9b9e8527c9f3 705185 0 2020-08-17 11:21:24 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-08-17 11:21:35 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Aug 17 11:21:45.843: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9650 /api/v1/namespaces/watch-9650/configmaps/e2e-watch-test-configmap-a e0992c16-3bb9-46d9-ad61-9b9e8527c9f3 705406 0 2020-08-17 11:21:24 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-08-17 11:21:45 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Aug 17 11:21:45.845: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9650 /api/v1/namespaces/watch-9650/configmaps/e2e-watch-test-configmap-a e0992c16-3bb9-46d9-ad61-9b9e8527c9f3 705406 0 2020-08-17 11:21:24 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-08-17 11:21:45 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification Aug 17 11:21:55.856: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9650 /api/v1/namespaces/watch-9650/configmaps/e2e-watch-test-configmap-a e0992c16-3bb9-46d9-ad61-9b9e8527c9f3 705454 0 2020-08-17 11:21:24 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 
2020-08-17 11:21:45 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Aug 17 11:21:55.857: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9650 /api/v1/namespaces/watch-9650/configmaps/e2e-watch-test-configmap-a e0992c16-3bb9-46d9-ad61-9b9e8527c9f3 705454 0 2020-08-17 11:21:24 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-08-17 11:21:45 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Aug 17 11:22:05.869: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-9650 /api/v1/namespaces/watch-9650/configmaps/e2e-watch-test-configmap-b 2a4a8716-5958-4daf-a4c2-6902decd9bb0 705484 0 2020-08-17 11:22:05 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-08-17 11:22:05 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Aug 17 11:22:05.871: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-9650 /api/v1/namespaces/watch-9650/configmaps/e2e-watch-test-configmap-b 2a4a8716-5958-4daf-a4c2-6902decd9bb0 705484 0 2020-08-17 11:22:05 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-08-17 11:22:05 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification Aug 17 11:22:15.883: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-9650 /api/v1/namespaces/watch-9650/configmaps/e2e-watch-test-configmap-b 2a4a8716-5958-4daf-a4c2-6902decd9bb0 705514 0 2020-08-17 11:22:05 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-08-17 11:22:05 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Aug 17 11:22:15.885: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-9650 /api/v1/namespaces/watch-9650/configmaps/e2e-watch-test-configmap-b 2a4a8716-5958-4daf-a4c2-6902decd9bb0 705514 0 2020-08-17 11:22:05 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-08-17 11:22:05 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 11:22:25.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9650" for this suite. 
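
The spec above drives three watchers (label A, label B, and A-or-B) and checks that each create, update, and delete is delivered to exactly the watchers whose selector matches, which is why every "Got :" event appears twice in the log. For readers reproducing that behavior outside the e2e framework, the following is a minimal client-go sketch, not the test's actual code. Assumptions: client-go v0.19.x, the kubeconfig path from this run, and the "default" namespace; the label selector mirrors the one the test puts on configmap A.

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Build a client from the same kubeconfig the suite uses (path assumed).
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // Open a watch equivalent to the test's "label A" watcher.
        w, err := client.CoreV1().ConfigMaps("default").Watch(context.TODO(), metav1.ListOptions{
            LabelSelector: "watch-this-configmap=multiple-watchers-A",
        })
        if err != nil {
            panic(err)
        }
        defer w.Stop()

        // ADDED / MODIFIED / DELETED events arrive on the result channel,
        // matching the "Got : ADDED/MODIFIED/DELETED" lines in the log above.
        for ev := range w.ResultChan() {
            if cm, ok := ev.Object.(*corev1.ConfigMap); ok {
                fmt.Printf("Got : %s %s mutation=%q\n", ev.Type, cm.Name, cm.Data["mutation"])
            }
        }
    }

A second watcher opened with the set-based selector "watch-this-configmap in (multiple-watchers-A,multiple-watchers-B)" would see the union of both streams, which is how the test verifies that its A-or-B watcher observes events for both configmaps.
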
• [SLOW TEST:61.823 seconds] [sig-api-machinery] Watchers /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":303,"completed":37,"skipped":584,"failed":0} SS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 11:22:26.501: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 11:23:05.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-1408" for this suite. STEP: Destroying namespace "nsdeletetest-3723" for this suite. Aug 17 11:23:05.886: INFO: Namespace nsdeletetest-3723 was already deleted STEP: Destroying namespace "nsdeletetest-5461" for this suite. • [SLOW TEST:39.391 seconds] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":303,"completed":38,"skipped":586,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 11:23:05.893: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 11:23:24.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6721" for this suite. • [SLOW TEST:18.548 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance]","total":303,"completed":39,"skipped":592,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 11:23:24.442: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Aug 17 11:23:25.081: INFO: >>> kubeConfig: /root/.kube/config Aug 17 11:23:46.880: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 11:25:01.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1587" for this suite. • [SLOW TEST:97.099 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":303,"completed":40,"skipped":594,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 11:25:01.543: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] 
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod test-webserver-3b6498ff-9a34-40fa-a9a0-74d766ac68f2 in namespace container-probe-8081 Aug 17 11:25:06.072: INFO: Started pod test-webserver-3b6498ff-9a34-40fa-a9a0-74d766ac68f2 in namespace container-probe-8081 STEP: checking the pod's current state and verifying that restartCount is present Aug 17 11:25:06.093: INFO: Initial restart count of pod test-webserver-3b6498ff-9a34-40fa-a9a0-74d766ac68f2 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 11:29:08.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8081" for this suite. • [SLOW TEST:247.003 seconds] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":303,"completed":41,"skipped":610,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 11:29:08.550: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-78584594-19ba-4e14-8822-75b30117d24b STEP: Creating a pod to test consume configMaps Aug 17 11:29:09.123: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-269cbc1f-b20b-4b07-9590-10313313062b" in namespace "projected-1146" to be "Succeeded or Failed" Aug 17 11:29:09.134: INFO: Pod "pod-projected-configmaps-269cbc1f-b20b-4b07-9590-10313313062b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.681946ms Aug 17 11:29:11.141: INFO: Pod "pod-projected-configmaps-269cbc1f-b20b-4b07-9590-10313313062b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018088452s Aug 17 11:29:13.212: INFO: Pod "pod-projected-configmaps-269cbc1f-b20b-4b07-9590-10313313062b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.088369385s Aug 17 11:29:15.218: INFO: Pod "pod-projected-configmaps-269cbc1f-b20b-4b07-9590-10313313062b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.094696939s STEP: Saw pod success Aug 17 11:29:15.218: INFO: Pod "pod-projected-configmaps-269cbc1f-b20b-4b07-9590-10313313062b" satisfied condition "Succeeded or Failed" Aug 17 11:29:15.223: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-269cbc1f-b20b-4b07-9590-10313313062b container projected-configmap-volume-test: STEP: delete the pod Aug 17 11:29:15.548: INFO: Waiting for pod pod-projected-configmaps-269cbc1f-b20b-4b07-9590-10313313062b to disappear Aug 17 11:29:15.552: INFO: Pod pod-projected-configmaps-269cbc1f-b20b-4b07-9590-10313313062b no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 11:29:15.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1146" for this suite. • [SLOW TEST:7.013 seconds] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":303,"completed":42,"skipped":647,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 11:29:15.564: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check if kubectl diff finds a difference for Deployments [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create deployment with httpd image Aug 17 11:29:15.763: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config create -f -' Aug 17 11:29:27.274: INFO: stderr: "" Aug 17 11:29:27.274: INFO: stdout: "deployment.apps/httpd-deployment created\n" STEP: verify diff finds difference between live and declared image Aug 17 11:29:27.275: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config diff -f -' Aug 17 11:29:31.350: INFO: rc: 1 Aug 17 11:29:31.352: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config delete -f -' Aug 17 11:29:32.888: INFO: stderr: "" Aug 17 11:29:32.888: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 11:29:32.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5539" for this suite. • [SLOW TEST:17.338 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl diff /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:888 should check if kubectl diff finds a difference for Deployments [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":303,"completed":43,"skipped":650,"failed":0} [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 11:29:32.903: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-map-678f5d02-e1c8-4071-a332-f43c52921952 STEP: Creating a pod to test consume secrets Aug 17 11:29:33.358: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5e76bcfe-7e60-47e2-bdcd-d6b331c245b8" in namespace "projected-2765" to be "Succeeded or Failed" Aug 17 11:29:33.386: INFO: Pod "pod-projected-secrets-5e76bcfe-7e60-47e2-bdcd-d6b331c245b8": Phase="Pending", Reason="", readiness=false. Elapsed: 28.613771ms Aug 17 11:29:35.402: INFO: Pod "pod-projected-secrets-5e76bcfe-7e60-47e2-bdcd-d6b331c245b8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043939596s Aug 17 11:29:37.416: INFO: Pod "pod-projected-secrets-5e76bcfe-7e60-47e2-bdcd-d6b331c245b8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058124592s Aug 17 11:29:39.792: INFO: Pod "pod-projected-secrets-5e76bcfe-7e60-47e2-bdcd-d6b331c245b8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.433996007s STEP: Saw pod success Aug 17 11:29:39.792: INFO: Pod "pod-projected-secrets-5e76bcfe-7e60-47e2-bdcd-d6b331c245b8" satisfied condition "Succeeded or Failed" Aug 17 11:29:39.990: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-5e76bcfe-7e60-47e2-bdcd-d6b331c245b8 container projected-secret-volume-test: STEP: delete the pod Aug 17 11:29:40.134: INFO: Waiting for pod pod-projected-secrets-5e76bcfe-7e60-47e2-bdcd-d6b331c245b8 to disappear Aug 17 11:29:40.146: INFO: Pod pod-projected-secrets-5e76bcfe-7e60-47e2-bdcd-d6b331c245b8 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 11:29:40.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2765" for this suite. • [SLOW TEST:7.260 seconds] [sig-storage] Projected secret /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":44,"skipped":650,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 11:29:40.164: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 17 11:29:40.369: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Aug 17 11:30:01.693: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3981 create -f -' Aug 17 11:30:10.913: INFO: stderr: "" Aug 17 11:30:10.913: INFO: stdout: "e2e-test-crd-publish-openapi-170-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Aug 17 11:30:10.914: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3981 delete e2e-test-crd-publish-openapi-170-crds test-cr' Aug 17 11:30:12.286: INFO: stderr: "" Aug 17 11:30:12.286: INFO: stdout: 
"e2e-test-crd-publish-openapi-170-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Aug 17 11:30:12.287: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3981 apply -f -' Aug 17 11:30:15.149: INFO: stderr: "" Aug 17 11:30:15.149: INFO: stdout: "e2e-test-crd-publish-openapi-170-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Aug 17 11:30:15.150: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3981 delete e2e-test-crd-publish-openapi-170-crds test-cr' Aug 17 11:30:16.707: INFO: stderr: "" Aug 17 11:30:16.707: INFO: stdout: "e2e-test-crd-publish-openapi-170-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Aug 17 11:30:16.708: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-170-crds' Aug 17 11:30:20.901: INFO: stderr: "" Aug 17 11:30:20.901: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-170-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 11:30:42.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3981" for this suite. • [SLOW TEST:62.279 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":303,"completed":45,"skipped":653,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 11:30:42.444: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on node default medium Aug 17 11:30:42.547: INFO: Waiting up to 5m0s for pod "pod-a78ee5b7-66a7-4b6f-9e6d-8e738cbbeffc" in 
namespace "emptydir-8454" to be "Succeeded or Failed" Aug 17 11:30:42.575: INFO: Pod "pod-a78ee5b7-66a7-4b6f-9e6d-8e738cbbeffc": Phase="Pending", Reason="", readiness=false. Elapsed: 27.623841ms Aug 17 11:30:44.722: INFO: Pod "pod-a78ee5b7-66a7-4b6f-9e6d-8e738cbbeffc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.174757666s Aug 17 11:30:46.729: INFO: Pod "pod-a78ee5b7-66a7-4b6f-9e6d-8e738cbbeffc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.182250072s Aug 17 11:30:48.756: INFO: Pod "pod-a78ee5b7-66a7-4b6f-9e6d-8e738cbbeffc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.208607198s STEP: Saw pod success Aug 17 11:30:48.756: INFO: Pod "pod-a78ee5b7-66a7-4b6f-9e6d-8e738cbbeffc" satisfied condition "Succeeded or Failed" Aug 17 11:30:48.760: INFO: Trying to get logs from node latest-worker pod pod-a78ee5b7-66a7-4b6f-9e6d-8e738cbbeffc container test-container: STEP: delete the pod Aug 17 11:30:48.809: INFO: Waiting for pod pod-a78ee5b7-66a7-4b6f-9e6d-8e738cbbeffc to disappear Aug 17 11:30:48.818: INFO: Pod pod-a78ee5b7-66a7-4b6f-9e6d-8e738cbbeffc no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 11:30:48.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8454" for this suite. • [SLOW TEST:6.387 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":46,"skipped":668,"failed":0} SSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 11:30:48.832: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Aug 17 11:30:49.087: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-896 /api/v1/namespaces/watch-896/configmaps/e2e-watch-test-label-changed 005ed0a9-d3c6-4b95-b8c7-f8ecdf9cc88a 707169 0 2020-08-17 
11:30:49 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-08-17 11:30:49 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Aug 17 11:30:49.089: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-896 /api/v1/namespaces/watch-896/configmaps/e2e-watch-test-label-changed 005ed0a9-d3c6-4b95-b8c7-f8ecdf9cc88a 707170 0 2020-08-17 11:30:49 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-08-17 11:30:49 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Aug 17 11:30:49.090: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-896 /api/v1/namespaces/watch-896/configmaps/e2e-watch-test-label-changed 005ed0a9-d3c6-4b95-b8c7-f8ecdf9cc88a 707171 0 2020-08-17 11:30:49 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-08-17 11:30:49 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Aug 17 11:30:59.180: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-896 /api/v1/namespaces/watch-896/configmaps/e2e-watch-test-label-changed 005ed0a9-d3c6-4b95-b8c7-f8ecdf9cc88a 707210 0 2020-08-17 11:30:49 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-08-17 11:30:59 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Aug 17 11:30:59.182: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-896 /api/v1/namespaces/watch-896/configmaps/e2e-watch-test-label-changed 005ed0a9-d3c6-4b95-b8c7-f8ecdf9cc88a 707211 0 2020-08-17 11:30:49 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-08-17 11:30:59 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} Aug 17 11:30:59.184: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-896 /api/v1/namespaces/watch-896/configmaps/e2e-watch-test-label-changed 005ed0a9-d3c6-4b95-b8c7-f8ecdf9cc88a 707212 0 2020-08-17 11:30:49 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-08-17 11:30:59 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers 
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 11:30:59.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-896" for this suite. • [SLOW TEST:10.375 seconds] [sig-api-machinery] Watchers /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":303,"completed":47,"skipped":672,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 11:30:59.211: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-3d5db68b-8da7-4d4f-bfdf-1c4fdf124bfa STEP: Creating a pod to test consume configMaps Aug 17 11:30:59.295: INFO: Waiting up to 5m0s for pod "pod-configmaps-0afaa787-3693-4dca-bb3f-91c40a02ab07" in namespace "configmap-2569" to be "Succeeded or Failed" Aug 17 11:30:59.320: INFO: Pod "pod-configmaps-0afaa787-3693-4dca-bb3f-91c40a02ab07": Phase="Pending", Reason="", readiness=false. Elapsed: 24.973062ms Aug 17 11:31:01.327: INFO: Pod "pod-configmaps-0afaa787-3693-4dca-bb3f-91c40a02ab07": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03163245s Aug 17 11:31:03.333: INFO: Pod "pod-configmaps-0afaa787-3693-4dca-bb3f-91c40a02ab07": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.038461632s STEP: Saw pod success Aug 17 11:31:03.334: INFO: Pod "pod-configmaps-0afaa787-3693-4dca-bb3f-91c40a02ab07" satisfied condition "Succeeded or Failed" Aug 17 11:31:03.339: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-0afaa787-3693-4dca-bb3f-91c40a02ab07 container configmap-volume-test: STEP: delete the pod Aug 17 11:31:03.542: INFO: Waiting for pod pod-configmaps-0afaa787-3693-4dca-bb3f-91c40a02ab07 to disappear Aug 17 11:31:03.553: INFO: Pod pod-configmaps-0afaa787-3693-4dca-bb3f-91c40a02ab07 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 11:31:03.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2569" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":303,"completed":48,"skipped":725,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 11:31:03.569: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 17 11:31:03.631: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 11:31:05.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-6232" for this suite. 
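(The CRD manifest this defaulting test applies is not echoed in the log. As a rough sketch of the feature it exercises, an apiextensions.k8s.io/v1 CustomResourceDefinition can declare defaults directly in its structural schema; every name below is hypothetical, not taken from the run:)

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com            # hypothetical group and kind
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              replicas:
                type: integer
                default: 1             # applied to incoming requests and to objects read back from storage

(With v1 CRDs the API server applies such defaults both at request time and when persisted objects are read back, which matches the "for requests and from storage" behavior this test asserts.)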
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":303,"completed":49,"skipped":760,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 11:31:05.212: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 17 11:31:05.295: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Aug 17 11:31:05.317: INFO: Pod name sample-pod: Found 0 pods out of 1 Aug 17 11:31:10.341: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Aug 17 11:31:10.342: INFO: Creating deployment "test-rolling-update-deployment" Aug 17 11:31:10.373: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Aug 17 11:31:10.469: INFO: deployment "test-rolling-update-deployment" doesn't have the required revision set Aug 17 11:31:12.676: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Aug 17 11:31:12.682: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733260670, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733260670, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733260670, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733260670, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-c4cb8d6d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 17 11:31:14.690: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 Aug 17 11:31:14.706: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-8328 
/apis/apps/v1/namespaces/deployment-8328/deployments/test-rolling-update-deployment b40d88ea-beb5-4ddf-8d73-1ce74a5f2ea0 707356 1 2020-08-17 11:31:10 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2020-08-17 11:31:10 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-08-17 11:31:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0x4003063fa8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-08-17 11:31:10 +0000 UTC,LastTransitionTime:2020-08-17 11:31:10 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-c4cb8d6d9" has successfully progressed.,LastUpdateTime:2020-08-17 11:31:14 +0000 UTC,LastTransitionTime:2020-08-17 11:31:10 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Aug 17 11:31:14.715: INFO: New ReplicaSet "test-rolling-update-deployment-c4cb8d6d9" of Deployment
"test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-c4cb8d6d9 deployment-8328 /apis/apps/v1/namespaces/deployment-8328/replicasets/test-rolling-update-deployment-c4cb8d6d9 660740ec-64e4-46d8-8fb5-d66fbbe02cbf 707345 1 2020-08-17 11:31:10 +0000 UTC map[name:sample-pod pod-template-hash:c4cb8d6d9] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment b40d88ea-beb5-4ddf-8d73-1ce74a5f2ea0 0x40031869a0 0x40031869a1}] [] [{kube-controller-manager Update apps/v1 2020-08-17 11:31:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b40d88ea-beb5-4ddf-8d73-1ce74a5f2ea0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: c4cb8d6d9,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:c4cb8d6d9] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0x4003186a88 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Aug 17 11:31:14.715: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Aug 17 11:31:14.716: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-8328 /apis/apps/v1/namespaces/deployment-8328/replicasets/test-rolling-update-controller 566c4aef-48fc-4fe5-bcba-077fe96c7ebd 707355 2 2020-08-17 11:31:05 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment b40d88ea-beb5-4ddf-8d73-1ce74a5f2ea0 
0x40031867d7 0x40031867d8}] [] [{e2e.test Update apps/v1 2020-08-17 11:31:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-08-17 11:31:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b40d88ea-beb5-4ddf-8d73-1ce74a5f2ea0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0x40031868f8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Aug 17 11:31:14.723: INFO: Pod "test-rolling-update-deployment-c4cb8d6d9-kqmsv" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-c4cb8d6d9-kqmsv test-rolling-update-deployment-c4cb8d6d9- deployment-8328 /api/v1/namespaces/deployment-8328/pods/test-rolling-update-deployment-c4cb8d6d9-kqmsv f991798a-e120-4056-b236-f0f3082d73ff 707344 0 2020-08-17 11:31:10 +0000 UTC map[name:sample-pod pod-template-hash:c4cb8d6d9] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-c4cb8d6d9 660740ec-64e4-46d8-8fb5-d66fbbe02cbf 0x4003187360 0x4003187361}] [] [{kube-controller-manager Update v1 2020-08-17 11:31:10 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"660740ec-64e4-46d8-8fb5-d66fbbe02cbf\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-17 11:31:14 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.229\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7tjxf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7tjxf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7tjxf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:31:10 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:31:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:31:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:31:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:10.244.2.229,StartTime:2020-08-17 11:31:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-17 11:31:13 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0,ContainerID:containerd://3da94c1408174378bfe750947a395f9eb2160ed8e5d2409917c4ca9f0c69e7a4,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.229,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 11:31:14.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8328" for this suite. • [SLOW TEST:9.524 seconds] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":303,"completed":50,"skipped":792,"failed":0} SSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 11:31:14.738: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Aug 17 11:31:15.021: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Aug 17 11:31:15.039: INFO: Waiting for terminating namespaces to be deleted... 
Aug 17 11:31:15.044: INFO: Logging pods the apiserver thinks are on node latest-worker before test Aug 17 11:31:15.053: INFO: test-rolling-update-deployment-c4cb8d6d9-kqmsv from deployment-8328 started at 2020-08-17 11:31:10 +0000 UTC (1 container status recorded) Aug 17 11:31:15.054: INFO: Container agnhost ready: true, restart count 0 Aug 17 11:31:15.054: INFO: kindnet-gmpqb from kube-system started at 2020-08-15 09:42:30 +0000 UTC (1 container status recorded) Aug 17 11:31:15.054: INFO: Container kindnet-cni ready: true, restart count 0 Aug 17 11:31:15.054: INFO: kube-proxy-82wrf from kube-system started at 2020-08-15 09:42:30 +0000 UTC (1 container status recorded) Aug 17 11:31:15.054: INFO: Container kube-proxy ready: true, restart count 0 Aug 17 11:31:15.054: INFO: Logging pods the apiserver thinks are on node latest-worker2 before test Aug 17 11:31:15.061: INFO: kindnet-grzzh from kube-system started at 2020-08-15 09:42:30 +0000 UTC (1 container status recorded) Aug 17 11:31:15.061: INFO: Container kindnet-cni ready: true, restart count 0 Aug 17 11:31:15.061: INFO: kube-proxy-fjk8r from kube-system started at 2020-08-15 09:42:29 +0000 UTC (1 container status recorded) Aug 17 11:31:15.061: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.162c0ad4649b9947], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] STEP: Considering event: Type = [Warning], Name = [restricted-pod.162c0ad46c6b4cb6], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 11:31:16.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9822" for this suite.
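(The FailedScheduling events above come from a pod whose spec.nodeSelector names a label that no node in the 3-node cluster carries. A minimal reproduction looks roughly like the following; the pod name matches the events, but the label and image are assumptions:)

apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod
spec:
  nodeSelector:
    e2e.example/nonexistent: "true"    # assumed label; no node has it, so scheduling fails
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.2        # assumed image choice

(Because nodeSelector is a hard requirement, the scheduler reports "0/3 nodes are available" instead of placing the pod anywhere.)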
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":303,"completed":51,"skipped":801,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 11:31:16.135: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 17 11:31:16.257: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-bd6b9e28-fad3-48db-8e86-b8c812dd0727" in namespace "security-context-test-3852" to be "Succeeded or Failed" Aug 17 11:31:16.267: INFO: Pod "alpine-nnp-false-bd6b9e28-fad3-48db-8e86-b8c812dd0727": Phase="Pending", Reason="", readiness=false. Elapsed: 9.788226ms Aug 17 11:31:18.275: INFO: Pod "alpine-nnp-false-bd6b9e28-fad3-48db-8e86-b8c812dd0727": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01754073s Aug 17 11:31:20.284: INFO: Pod "alpine-nnp-false-bd6b9e28-fad3-48db-8e86-b8c812dd0727": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026723203s Aug 17 11:31:20.284: INFO: Pod "alpine-nnp-false-bd6b9e28-fad3-48db-8e86-b8c812dd0727" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 11:31:20.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-3852" for this suite. 
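(The pod name alpine-nnp-false-... above hints at what is being checked: an Alpine-based container with allowPrivilegeEscalation set to false, "nnp" being shorthand for no_new_privs. A hedged sketch of such a pod, with an assumed image and probe command, is:)

apiVersion: v1
kind: Pod
metadata:
  name: alpine-nnp-false
spec:
  restartPolicy: Never
  containers:
  - name: alpine-nnp-false
    image: alpine:3.12                 # assumed; the e2e suite ships its own test image
    command: ["sh", "-c", "grep NoNewPrivs /proc/self/status"]   # assumed way to inspect the flag
    securityContext:
      runAsUser: 1000
      allowPrivilegeEscalation: false  # sets no_new_privs, so setuid binaries cannot raise privileges

(The pod running to Succeeded, as logged above, is the pass condition.)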
•{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":52,"skipped":814,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 11:31:20.693: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 17 11:31:25.028: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 17 11:31:27.050: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733260685, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733260685, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733260685, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733260684, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 17 11:31:30.087: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Aug 17 11:31:30.121: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 11:31:30.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5462" for this suite. STEP: Destroying namespace "webhook-5462-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.663 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":303,"completed":53,"skipped":840,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 11:31:30.359: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should create services for rc [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating Agnhost RC Aug 17 11:31:30.487: INFO: namespace kubectl-245 Aug 17 11:31:30.487: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-245' Aug 17 11:31:33.374: INFO: stderr: "" Aug 17 11:31:33.374: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Aug 17 11:31:34.385: INFO: Selector matched 1 pods for map[app:agnhost] Aug 17 11:31:34.385: INFO: Found 0 / 1 Aug 17 11:31:35.387: INFO: Selector matched 1 pods for map[app:agnhost] Aug 17 11:31:35.387: INFO: Found 0 / 1 Aug 17 11:31:36.397: INFO: Selector matched 1 pods for map[app:agnhost] Aug 17 11:31:36.398: INFO: Found 1 / 1 Aug 17 11:31:36.399: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Aug 17 11:31:36.406: INFO: Selector matched 1 pods for map[app:agnhost] Aug 17 11:31:36.406: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Aug 17 11:31:36.407: INFO: wait on agnhost-primary startup in kubectl-245 Aug 17 11:31:36.408: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config logs agnhost-primary-mbgjz agnhost-primary --namespace=kubectl-245' Aug 17 11:31:37.835: INFO: stderr: "" Aug 17 11:31:37.835: INFO: stdout: "Paused\n" STEP: exposing RC Aug 17 11:31:37.836: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-245' Aug 17 11:31:39.549: INFO: stderr: "" Aug 17 11:31:39.549: INFO: stdout: "service/rm2 exposed\n" Aug 17 11:31:39.660: INFO: Service rm2 in namespace kubectl-245 found. STEP: exposing service Aug 17 11:31:41.674: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-245' Aug 17 11:31:43.130: INFO: stderr: "" Aug 17 11:31:43.130: INFO: stdout: "service/rm3 exposed\n" Aug 17 11:31:43.155: INFO: Service rm3 in namespace kubectl-245 found. [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 11:31:45.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-245" for this suite. • [SLOW TEST:14.817 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1246 should create services for rc [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":303,"completed":54,"skipped":862,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 11:31:45.176: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on tmpfs Aug 17 11:31:45.270: INFO: Waiting up to 5m0s for pod "pod-d768f043-c4c6-42d2-a025-453d85812fd5" in namespace "emptydir-4554" to be "Succeeded or Failed" Aug 17 11:31:45.286: INFO: Pod "pod-d768f043-c4c6-42d2-a025-453d85812fd5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 16.192024ms Aug 17 11:31:47.307: INFO: Pod "pod-d768f043-c4c6-42d2-a025-453d85812fd5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036771515s Aug 17 11:31:49.315: INFO: Pod "pod-d768f043-c4c6-42d2-a025-453d85812fd5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044402353s Aug 17 11:31:51.322: INFO: Pod "pod-d768f043-c4c6-42d2-a025-453d85812fd5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.051614413s STEP: Saw pod success Aug 17 11:31:51.322: INFO: Pod "pod-d768f043-c4c6-42d2-a025-453d85812fd5" satisfied condition "Succeeded or Failed" Aug 17 11:31:51.326: INFO: Trying to get logs from node latest-worker pod pod-d768f043-c4c6-42d2-a025-453d85812fd5 container test-container: STEP: delete the pod Aug 17 11:31:51.399: INFO: Waiting for pod pod-d768f043-c4c6-42d2-a025-453d85812fd5 to disappear Aug 17 11:31:51.456: INFO: Pod pod-d768f043-c4c6-42d2-a025-453d85812fd5 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 11:31:51.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4554" for this suite. • [SLOW TEST:6.298 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":55,"skipped":865,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 11:31:51.479: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap configmap-3253/configmap-test-faf9d17e-f6c1-44cd-b8c5-c65efcf9f229 STEP: Creating a pod to test consume configMaps Aug 17 11:31:51.579: INFO: Waiting up to 5m0s for pod "pod-configmaps-03873ba8-c804-4174-97d5-13a4ffa91365" in namespace "configmap-3253" to be "Succeeded or Failed" Aug 17 11:31:51.613: INFO: Pod "pod-configmaps-03873ba8-c804-4174-97d5-13a4ffa91365": Phase="Pending", Reason="", readiness=false. Elapsed: 33.883369ms Aug 17 11:31:53.619: INFO: Pod "pod-configmaps-03873ba8-c804-4174-97d5-13a4ffa91365": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.040137016s Aug 17 11:31:55.626: INFO: Pod "pod-configmaps-03873ba8-c804-4174-97d5-13a4ffa91365": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.046926479s STEP: Saw pod success Aug 17 11:31:55.626: INFO: Pod "pod-configmaps-03873ba8-c804-4174-97d5-13a4ffa91365" satisfied condition "Succeeded or Failed" Aug 17 11:31:55.630: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-03873ba8-c804-4174-97d5-13a4ffa91365 container env-test: STEP: delete the pod Aug 17 11:31:55.709: INFO: Waiting for pod pod-configmaps-03873ba8-c804-4174-97d5-13a4ffa91365 to disappear Aug 17 11:31:55.717: INFO: Pod pod-configmaps-03873ba8-c804-4174-97d5-13a4ffa91365 no longer exists [AfterEach] [sig-node] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 11:31:55.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3253" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":303,"completed":56,"skipped":891,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 11:31:55.730: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 17 11:31:55.789: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 11:31:56.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-9764" for this suite. 
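The create/delete cycle this test performs can be reproduced with a minimal apiextensions.k8s.io/v1 manifest. A sketch only; the group and kind below are hypothetical placeholders, and v1 requires a structural schema, satisfied here by a permissive object schema:

cat <<'EOF' | kubectl apply -f -
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: noxus.mygroup.example.com         # must be <plural>.<group>
spec:
  group: mygroup.example.com              # hypothetical group
  scope: Namespaced
  names:
    plural: noxus
    singular: noxu
    kind: Noxu
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
EOF
# Deleting the definition also removes any custom objects stored under it:
kubectl delete crd noxus.mygroup.example.com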
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":303,"completed":57,"skipped":904,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 11:31:56.875: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Aug 17 11:32:05.061: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 17 11:32:05.086: INFO: Pod pod-with-prestop-exec-hook still exists Aug 17 11:32:07.086: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 17 11:32:07.094: INFO: Pod pod-with-prestop-exec-hook still exists Aug 17 11:32:09.086: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 17 11:32:09.093: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 11:32:09.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9748" for this suite. 
• [SLOW TEST:12.243 seconds] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":303,"completed":58,"skipped":921,"failed":0} SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 11:32:09.119: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 17 11:32:09.204: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1841' Aug 17 11:32:11.586: INFO: stderr: "" Aug 17 11:32:11.586: INFO: stdout: "replicationcontroller/agnhost-primary created\n" Aug 17 11:32:11.587: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1841' Aug 17 11:32:13.804: INFO: stderr: "" Aug 17 11:32:13.805: INFO: stdout: "service/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Aug 17 11:32:14.835: INFO: Selector matched 1 pods for map[app:agnhost] Aug 17 11:32:14.836: INFO: Found 0 / 1 Aug 17 11:32:15.811: INFO: Selector matched 1 pods for map[app:agnhost] Aug 17 11:32:15.811: INFO: Found 1 / 1 Aug 17 11:32:15.812: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Aug 17 11:32:15.816: INFO: Selector matched 1 pods for map[app:agnhost] Aug 17 11:32:15.816: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Aug 17 11:32:15.817: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config describe pod agnhost-primary-swr5k --namespace=kubectl-1841' Aug 17 11:32:17.297: INFO: stderr: "" Aug 17 11:32:17.297: INFO: stdout: "Name: agnhost-primary-swr5k\nNamespace: kubectl-1841\nPriority: 0\nNode: latest-worker2/172.18.0.14\nStart Time: Mon, 17 Aug 2020 11:32:11 +0000\nLabels: app=agnhost\n role=primary\nAnnotations: \nStatus: Running\nIP: 10.244.1.209\nIPs:\n IP: 10.244.1.209\nControlled By: ReplicationController/agnhost-primary\nContainers:\n agnhost-primary:\n Container ID: containerd://0046ff5e3dc18bf4b98a74cf82e643d3ec2211de5284074987d0c8fc45461002\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.20\n Image ID: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Mon, 17 Aug 2020 11:32:14 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-lh9kz (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-lh9kz:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-lh9kz\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 5s Successfully assigned kubectl-1841/agnhost-primary-swr5k to latest-worker2\n Normal Pulled 5s kubelet, latest-worker2 Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.20\" already present on machine\n Normal Created 3s kubelet, latest-worker2 Created container agnhost-primary\n Normal Started 3s kubelet, latest-worker2 Started container agnhost-primary\n" Aug 17 11:32:17.299: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config describe rc agnhost-primary --namespace=kubectl-1841' Aug 17 11:32:19.150: INFO: stderr: "" Aug 17 11:32:19.150: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-1841\nSelector: app=agnhost,role=primary\nLabels: app=agnhost\n role=primary\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=primary\n Containers:\n agnhost-primary:\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.20\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 8s replication-controller Created pod: agnhost-primary-swr5k\n" Aug 17 11:32:19.151: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config describe service agnhost-primary --namespace=kubectl-1841' Aug 17 11:32:21.009: INFO: stderr: "" Aug 17 11:32:21.009: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-1841\nLabels: app=agnhost\n role=primary\nAnnotations: \nSelector: app=agnhost,role=primary\nType: ClusterIP\nIP: 10.107.171.37\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.1.209:6379\nSession Affinity: None\nEvents: \n" Aug 17 11:32:21.297: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config describe node latest-control-plane' Aug 
17 11:32:22.871: INFO: stderr: "" Aug 17 11:32:22.871: INFO: stdout: "Name: latest-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=latest-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sat, 15 Aug 2020 09:42:01 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: latest-control-plane\n AcquireTime: \n RenewTime: Mon, 17 Aug 2020 11:32:20 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Mon, 17 Aug 2020 11:29:35 +0000 Sat, 15 Aug 2020 09:41:59 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Mon, 17 Aug 2020 11:29:35 +0000 Sat, 15 Aug 2020 09:41:59 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Mon, 17 Aug 2020 11:29:35 +0000 Sat, 15 Aug 2020 09:41:59 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Mon, 17 Aug 2020 11:29:35 +0000 Sat, 15 Aug 2020 09:42:31 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.18.0.12\n Hostname: latest-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759872Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759872Ki\n pods: 110\nSystem Info:\n Machine ID: 355da13825784523b4a253c23edd1334\n System UUID: 8f367e0f-042b-45ff-9966-5ca6bcc1cc56\n Boot ID: 11738d2d-5baa-4089-8e7f-2fb0329fce58\n Kernel Version: 4.15.0-109-generic\n OS Image: Ubuntu 20.04 LTS\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.4.0-beta.1-85-g334f567e\n Kubelet Version: v1.19.0-rc.1\n Kube-Proxy Version: v1.19.0-rc.1\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-f9fd979d6-f7hdg 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 2d1h\n kube-system coredns-f9fd979d6-vxzgb 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 2d1h\n kube-system etcd-latest-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2d1h\n kube-system kindnet-qmj2d 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 2d1h\n kube-system kube-apiserver-latest-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 2d1h\n kube-system kube-controller-manager-latest-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 2d1h\n kube-system kube-proxy-8zfjc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2d1h\n kube-system kube-scheduler-latest-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 2d1h\n local-path-storage local-path-provisioner-8b46957d4-csnr8 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2d1h\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" Aug 17 11:32:22.875: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 
--kubeconfig=/root/.kube/config describe namespace kubectl-1841' Aug 17 11:32:25.506: INFO: stderr: "" Aug 17 11:32:25.507: INFO: stdout: "Name: kubectl-1841\nLabels: e2e-framework=kubectl\n e2e-run=6247fac7-7b4a-49ee-8e2e-c02fa38d14a8\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 11:32:25.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1841" for this suite. • [SLOW TEST:16.401 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1105 should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":303,"completed":59,"skipped":932,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 11:32:25.522: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-3353 STEP: creating a selector STEP: Creating the service pods in kubernetes Aug 17 11:32:25.633: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Aug 17 11:32:25.744: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Aug 17 11:32:27.902: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Aug 17 11:32:29.799: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Aug 17 11:32:31.832: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Aug 17 11:32:33.752: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 17 11:32:35.751: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 17 11:32:37.750: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 17 11:32:40.112: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 17 11:32:41.750: 
INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 17 11:32:43.751: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 17 11:32:45.752: INFO: The status of Pod netserver-0 is Running (Ready = true) Aug 17 11:32:45.763: INFO: The status of Pod netserver-1 is Running (Ready = false) Aug 17 11:32:47.771: INFO: The status of Pod netserver-1 is Running (Ready = false) Aug 17 11:32:49.794: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Aug 17 11:32:56.140: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.234:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-3353 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 17 11:32:56.140: INFO: >>> kubeConfig: /root/.kube/config I0817 11:32:56.192339 10 log.go:181] (0x4001e1e0b0) (0x40003995e0) Create stream I0817 11:32:56.192499 10 log.go:181] (0x4001e1e0b0) (0x40003995e0) Stream added, broadcasting: 1 I0817 11:32:56.196323 10 log.go:181] (0x4001e1e0b0) Reply frame received for 1 I0817 11:32:56.196506 10 log.go:181] (0x4001e1e0b0) (0x400747f9a0) Create stream I0817 11:32:56.196604 10 log.go:181] (0x4001e1e0b0) (0x400747f9a0) Stream added, broadcasting: 3 I0817 11:32:56.198502 10 log.go:181] (0x4001e1e0b0) Reply frame received for 3 I0817 11:32:56.198669 10 log.go:181] (0x4001e1e0b0) (0x400044aaa0) Create stream I0817 11:32:56.198755 10 log.go:181] (0x4001e1e0b0) (0x400044aaa0) Stream added, broadcasting: 5 I0817 11:32:56.200091 10 log.go:181] (0x4001e1e0b0) Reply frame received for 5 I0817 11:32:56.329061 10 log.go:181] (0x4001e1e0b0) Data frame received for 3 I0817 11:32:56.329240 10 log.go:181] (0x400747f9a0) (3) Data frame handling I0817 11:32:56.329387 10 log.go:181] (0x4001e1e0b0) Data frame received for 5 I0817 11:32:56.329563 10 log.go:181] (0x400044aaa0) (5) Data frame handling I0817 11:32:56.329728 10 log.go:181] (0x400747f9a0) (3) Data frame sent I0817 11:32:56.329871 10 log.go:181] (0x4001e1e0b0) Data frame received for 3 I0817 11:32:56.329985 10 log.go:181] (0x400747f9a0) (3) Data frame handling I0817 11:32:56.331137 10 log.go:181] (0x4001e1e0b0) Data frame received for 1 I0817 11:32:56.331244 10 log.go:181] (0x40003995e0) (1) Data frame handling I0817 11:32:56.331349 10 log.go:181] (0x40003995e0) (1) Data frame sent I0817 11:32:56.331457 10 log.go:181] (0x4001e1e0b0) (0x40003995e0) Stream removed, broadcasting: 1 I0817 11:32:56.331573 10 log.go:181] (0x4001e1e0b0) Go away received I0817 11:32:56.331889 10 log.go:181] (0x4001e1e0b0) (0x40003995e0) Stream removed, broadcasting: 1 I0817 11:32:56.332038 10 log.go:181] (0x4001e1e0b0) (0x400747f9a0) Stream removed, broadcasting: 3 I0817 11:32:56.332157 10 log.go:181] (0x4001e1e0b0) (0x400044aaa0) Stream removed, broadcasting: 5 Aug 17 11:32:56.333: INFO: Found all expected endpoints: [netserver-0] Aug 17 11:32:56.338: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.210:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-3353 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 17 11:32:56.338: INFO: >>> kubeConfig: /root/.kube/config I0817 11:32:56.402757 10 log.go:181] (0x4000e17ad0) (0x4002ae3900) Create stream I0817 11:32:56.402937 10 log.go:181] (0x4000e17ad0) (0x4002ae3900) Stream added, broadcasting: 1 I0817 11:32:56.409107 10 
log.go:181] (0x4000e17ad0) Reply frame received for 1 I0817 11:32:56.409355 10 log.go:181] (0x4000e17ad0) (0x4001418000) Create stream I0817 11:32:56.409486 10 log.go:181] (0x4000e17ad0) (0x4001418000) Stream added, broadcasting: 3 I0817 11:32:56.410999 10 log.go:181] (0x4000e17ad0) Reply frame received for 3 I0817 11:32:56.411142 10 log.go:181] (0x4000e17ad0) (0x4002ae39a0) Create stream I0817 11:32:56.411209 10 log.go:181] (0x4000e17ad0) (0x4002ae39a0) Stream added, broadcasting: 5 I0817 11:32:56.412610 10 log.go:181] (0x4000e17ad0) Reply frame received for 5 I0817 11:32:56.465922 10 log.go:181] (0x4000e17ad0) Data frame received for 3 I0817 11:32:56.466075 10 log.go:181] (0x4001418000) (3) Data frame handling I0817 11:32:56.466165 10 log.go:181] (0x4000e17ad0) Data frame received for 5 I0817 11:32:56.466279 10 log.go:181] (0x4002ae39a0) (5) Data frame handling I0817 11:32:56.466369 10 log.go:181] (0x4001418000) (3) Data frame sent I0817 11:32:56.466452 10 log.go:181] (0x4000e17ad0) Data frame received for 3 I0817 11:32:56.466510 10 log.go:181] (0x4001418000) (3) Data frame handling I0817 11:32:56.467984 10 log.go:181] (0x4000e17ad0) Data frame received for 1 I0817 11:32:56.468127 10 log.go:181] (0x4002ae3900) (1) Data frame handling I0817 11:32:56.468270 10 log.go:181] (0x4002ae3900) (1) Data frame sent I0817 11:32:56.468410 10 log.go:181] (0x4000e17ad0) (0x4002ae3900) Stream removed, broadcasting: 1 I0817 11:32:56.468572 10 log.go:181] (0x4000e17ad0) Go away received I0817 11:32:56.468903 10 log.go:181] (0x4000e17ad0) (0x4002ae3900) Stream removed, broadcasting: 1 I0817 11:32:56.469004 10 log.go:181] (0x4000e17ad0) (0x4001418000) Stream removed, broadcasting: 3 I0817 11:32:56.469140 10 log.go:181] (0x4000e17ad0) (0x4002ae39a0) Stream removed, broadcasting: 5 Aug 17 11:32:56.469: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 11:32:56.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-3353" for this suite. 
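Each endpoint check above is an exec'd curl from the host-network test pod against a netserver pod's /hostName handler. Run by hand it looks like the following; the pod name, namespace, and target IP are the ones from this run and will differ elsewhere:

kubectl --kubeconfig=/root/.kube/config exec host-test-container-pod \
  --namespace=pod-network-test-3353 -c agnhost -- \
  /bin/sh -c 'curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.234:8080/hostName | grep -v "^\s*$"'
# Expected stdout: the hostname of the serving pod, e.g. netserver-0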
• [SLOW TEST:30.962 seconds] [sig-network] Networking /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":60,"skipped":943,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 11:32:56.486: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 17 11:32:56.534: INFO: Creating deployment "test-recreate-deployment" Aug 17 11:32:56.668: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Aug 17 11:32:56.755: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Aug 17 11:32:58.867: INFO: Waiting deployment "test-recreate-deployment" to complete Aug 17 11:32:58.872: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733260776, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733260776, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733260776, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733260776, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-c96cf48f\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 17 11:33:01.645: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733260776, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733260776, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733260776, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733260776, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-c96cf48f\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 17 11:33:03.036: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733260776, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733260776, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733260776, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733260776, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-c96cf48f\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 17 11:33:04.901: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Aug 17 11:33:05.027: INFO: Updating deployment test-recreate-deployment Aug 17 11:33:05.028: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 Aug 17 11:33:06.946: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-5115 /apis/apps/v1/namespaces/deployment-5115/deployments/test-recreate-deployment 0b2d965a-e5e4-42c0-93b3-45fde4a68a6d 708103 2 2020-08-17 11:32:56 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-08-17 11:33:05 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-08-17 11:33:06 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0x40069e6c88 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-08-17 11:33:06 +0000 UTC,LastTransitionTime:2020-08-17 11:33:06 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-f79dd4667" is progressing.,LastUpdateTime:2020-08-17 11:33:06 +0000 UTC,LastTransitionTime:2020-08-17 11:32:56 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Aug 17 11:33:07.117: INFO: New ReplicaSet "test-recreate-deployment-f79dd4667" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-f79dd4667 deployment-5115 /apis/apps/v1/namespaces/deployment-5115/replicasets/test-recreate-deployment-f79dd4667 84693c3b-bd5d-4663-90a4-b810c0face5d 708101 1 2020-08-17 11:33:06 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 0b2d965a-e5e4-42c0-93b3-45fde4a68a6d 0x40069e7170 0x40069e7171}] [] [{kube-controller-manager Update apps/v1 2020-08-17 11:33:06 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0b2d965a-e5e4-42c0-93b3-45fde4a68a6d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: f79dd4667,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0x40069e71e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Aug 17 11:33:07.117: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Aug 17 11:33:07.119: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-c96cf48f deployment-5115 /apis/apps/v1/namespaces/deployment-5115/replicasets/test-recreate-deployment-c96cf48f ecacc484-4567-4e3f-9480-2253565d8b27 708092 2 2020-08-17 11:32:56 +0000 UTC map[name:sample-pod-3 pod-template-hash:c96cf48f] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 0b2d965a-e5e4-42c0-93b3-45fde4a68a6d 0x40069e707f 0x40069e7090}] [] [{kube-controller-manager Update apps/v1 2020-08-17 11:33:06 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0b2d965a-e5e4-42c0-93b3-45fde4a68a6d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: c96cf48f,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:c96cf48f] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0x40069e7108 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Aug 17 11:33:07.180: INFO: Pod "test-recreate-deployment-f79dd4667-mpbt7" is not available: &Pod{ObjectMeta:{test-recreate-deployment-f79dd4667-mpbt7 test-recreate-deployment-f79dd4667- deployment-5115 /api/v1/namespaces/deployment-5115/pods/test-recreate-deployment-f79dd4667-mpbt7 c30518c0-86bb-40da-91a7-6a35aada3a48 708105 0 2020-08-17 11:33:06 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] map[] [{apps/v1 ReplicaSet test-recreate-deployment-f79dd4667 84693c3b-bd5d-4663-90a4-b810c0face5d 0x40069e7690 0x40069e7691}] [] [{kube-controller-manager Update v1 2020-08-17 11:33:06 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"84693c3b-bd5d-4663-90a4-b810c0face5d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-17 11:33:06 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ljkzk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ljkzk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ljkzk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:33:06 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:33:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:33:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 11:33:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:,StartTime:2020-08-17 11:33:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 11:33:07.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5115" for this suite. • [SLOW TEST:10.911 seconds] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":303,"completed":61,"skipped":980,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 11:33:07.404: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide podname only [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 17 11:33:08.351: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c881b34f-8d26-45b1-a044-cc0c13b470c8" in namespace "downward-api-7980" to be "Succeeded 
or Failed" Aug 17 11:33:08.534: INFO: Pod "downwardapi-volume-c881b34f-8d26-45b1-a044-cc0c13b470c8": Phase="Pending", Reason="", readiness=false. Elapsed: 182.119155ms Aug 17 11:33:10.541: INFO: Pod "downwardapi-volume-c881b34f-8d26-45b1-a044-cc0c13b470c8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.189239s Aug 17 11:33:12.550: INFO: Pod "downwardapi-volume-c881b34f-8d26-45b1-a044-cc0c13b470c8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.198748215s Aug 17 11:33:14.558: INFO: Pod "downwardapi-volume-c881b34f-8d26-45b1-a044-cc0c13b470c8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.206015846s STEP: Saw pod success Aug 17 11:33:14.558: INFO: Pod "downwardapi-volume-c881b34f-8d26-45b1-a044-cc0c13b470c8" satisfied condition "Succeeded or Failed" Aug 17 11:33:14.564: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-c881b34f-8d26-45b1-a044-cc0c13b470c8 container client-container: STEP: delete the pod Aug 17 11:33:14.621: INFO: Waiting for pod downwardapi-volume-c881b34f-8d26-45b1-a044-cc0c13b470c8 to disappear Aug 17 11:33:14.626: INFO: Pod downwardapi-volume-c881b34f-8d26-45b1-a044-cc0c13b470c8 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 11:33:14.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7980" for this suite. • [SLOW TEST:7.239 seconds] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide podname only [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":303,"completed":62,"skipped":1046,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 11:33:14.646: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to create a functioning NodePort service [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service nodeport-test with type=NodePort in namespace services-2867 STEP: creating replication controller nodeport-test in namespace services-2867 I0817 11:33:14.785571 10 runners.go:190] Created replication controller with name: nodeport-test, namespace: 
services-2867, replica count: 2 I0817 11:33:17.837420 10 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0817 11:33:20.838058 10 runners.go:190] nodeport-test Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0817 11:33:23.838757 10 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 17 11:33:23.839: INFO: Creating new exec pod Aug 17 11:33:29.024: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-2867 execpodzrxzk -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Aug 17 11:33:30.663: INFO: stderr: "I0817 11:33:30.541277 647 log.go:181] (0x4000132c60) (0x4000b8c500) Create stream\nI0817 11:33:30.544583 647 log.go:181] (0x4000132c60) (0x4000b8c500) Stream added, broadcasting: 1\nI0817 11:33:30.558341 647 log.go:181] (0x4000132c60) Reply frame received for 1\nI0817 11:33:30.559477 647 log.go:181] (0x4000132c60) (0x4000b8c5a0) Create stream\nI0817 11:33:30.559573 647 log.go:181] (0x4000132c60) (0x4000b8c5a0) Stream added, broadcasting: 3\nI0817 11:33:30.561458 647 log.go:181] (0x4000132c60) Reply frame received for 3\nI0817 11:33:30.561716 647 log.go:181] (0x4000132c60) (0x4000250000) Create stream\nI0817 11:33:30.561771 647 log.go:181] (0x4000132c60) (0x4000250000) Stream added, broadcasting: 5\nI0817 11:33:30.562988 647 log.go:181] (0x4000132c60) Reply frame received for 5\nI0817 11:33:30.638206 647 log.go:181] (0x4000132c60) Data frame received for 5\nI0817 11:33:30.638374 647 log.go:181] (0x4000250000) (5) Data frame handling\nI0817 11:33:30.638701 647 log.go:181] (0x4000250000) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0817 11:33:30.646146 647 log.go:181] (0x4000132c60) Data frame received for 5\nI0817 11:33:30.646204 647 log.go:181] (0x4000250000) (5) Data frame handling\nI0817 11:33:30.646270 647 log.go:181] (0x4000250000) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0817 11:33:30.646342 647 log.go:181] (0x4000132c60) Data frame received for 5\nI0817 11:33:30.646403 647 log.go:181] (0x4000250000) (5) Data frame handling\nI0817 11:33:30.646480 647 log.go:181] (0x4000132c60) Data frame received for 3\nI0817 11:33:30.646544 647 log.go:181] (0x4000b8c5a0) (3) Data frame handling\nI0817 11:33:30.648326 647 log.go:181] (0x4000132c60) Data frame received for 1\nI0817 11:33:30.648408 647 log.go:181] (0x4000b8c500) (1) Data frame handling\nI0817 11:33:30.648474 647 log.go:181] (0x4000b8c500) (1) Data frame sent\nI0817 11:33:30.649584 647 log.go:181] (0x4000132c60) (0x4000b8c500) Stream removed, broadcasting: 1\nI0817 11:33:30.651501 647 log.go:181] (0x4000132c60) Go away received\nI0817 11:33:30.654821 647 log.go:181] (0x4000132c60) (0x4000b8c500) Stream removed, broadcasting: 1\nI0817 11:33:30.655166 647 log.go:181] (0x4000132c60) (0x4000b8c5a0) Stream removed, broadcasting: 3\nI0817 11:33:30.655409 647 log.go:181] (0x4000132c60) (0x4000250000) Stream removed, broadcasting: 5\n" Aug 17 11:33:30.664: INFO: stdout: "" Aug 17 11:33:30.668: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-2867 execpodzrxzk -- /bin/sh -x -c nc -zv -t -w 2 10.108.242.140 80' Aug 17 11:33:32.385: INFO: stderr: "I0817 11:33:32.284208 667 
log.go:181] (0x4000232370) (0x40005ee000) Create stream\nI0817 11:33:32.288018 667 log.go:181] (0x4000232370) (0x40005ee000) Stream added, broadcasting: 1\nI0817 11:33:32.298714 667 log.go:181] (0x4000232370) Reply frame received for 1\nI0817 11:33:32.299430 667 log.go:181] (0x4000232370) (0x40003986e0) Create stream\nI0817 11:33:32.299501 667 log.go:181] (0x4000232370) (0x40003986e0) Stream added, broadcasting: 3\nI0817 11:33:32.300931 667 log.go:181] (0x4000232370) Reply frame received for 3\nI0817 11:33:32.301232 667 log.go:181] (0x4000232370) (0x400091e3c0) Create stream\nI0817 11:33:32.301298 667 log.go:181] (0x4000232370) (0x400091e3c0) Stream added, broadcasting: 5\nI0817 11:33:32.302518 667 log.go:181] (0x4000232370) Reply frame received for 5\nI0817 11:33:32.364957 667 log.go:181] (0x4000232370) Data frame received for 1\nI0817 11:33:32.365318 667 log.go:181] (0x4000232370) Data frame received for 5\nI0817 11:33:32.365519 667 log.go:181] (0x40005ee000) (1) Data frame handling\nI0817 11:33:32.365658 667 log.go:181] (0x400091e3c0) (5) Data frame handling\nI0817 11:33:32.365833 667 log.go:181] (0x4000232370) Data frame received for 3\nI0817 11:33:32.365919 667 log.go:181] (0x40003986e0) (3) Data frame handling\n+ nc -zv -t -w 2 10.108.242.140 80\nConnection to 10.108.242.140 80 port [tcp/http] succeeded!\nI0817 11:33:32.367879 667 log.go:181] (0x400091e3c0) (5) Data frame sent\nI0817 11:33:32.368558 667 log.go:181] (0x4000232370) Data frame received for 5\nI0817 11:33:32.368657 667 log.go:181] (0x400091e3c0) (5) Data frame handling\nI0817 11:33:32.368856 667 log.go:181] (0x40005ee000) (1) Data frame sent\nI0817 11:33:32.370460 667 log.go:181] (0x4000232370) (0x40005ee000) Stream removed, broadcasting: 1\nI0817 11:33:32.371351 667 log.go:181] (0x4000232370) Go away received\nI0817 11:33:32.373820 667 log.go:181] (0x4000232370) (0x40005ee000) Stream removed, broadcasting: 1\nI0817 11:33:32.374537 667 log.go:181] (0x4000232370) (0x40003986e0) Stream removed, broadcasting: 3\nI0817 11:33:32.374815 667 log.go:181] (0x4000232370) (0x400091e3c0) Stream removed, broadcasting: 5\n" Aug 17 11:33:32.386: INFO: stdout: "" Aug 17 11:33:32.388: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-2867 execpodzrxzk -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.11 31421' Aug 17 11:33:34.034: INFO: stderr: "I0817 11:33:33.918372 687 log.go:181] (0x400014c370) (0x400099a000) Create stream\nI0817 11:33:33.922733 687 log.go:181] (0x400014c370) (0x400099a000) Stream added, broadcasting: 1\nI0817 11:33:33.934812 687 log.go:181] (0x400014c370) Reply frame received for 1\nI0817 11:33:33.935721 687 log.go:181] (0x400014c370) (0x400080e320) Create stream\nI0817 11:33:33.935802 687 log.go:181] (0x400014c370) (0x400080e320) Stream added, broadcasting: 3\nI0817 11:33:33.937654 687 log.go:181] (0x400014c370) Reply frame received for 3\nI0817 11:33:33.938004 687 log.go:181] (0x400014c370) (0x400080ee60) Create stream\nI0817 11:33:33.938083 687 log.go:181] (0x400014c370) (0x400080ee60) Stream added, broadcasting: 5\nI0817 11:33:33.939367 687 log.go:181] (0x400014c370) Reply frame received for 5\nI0817 11:33:34.011620 687 log.go:181] (0x400014c370) Data frame received for 3\nI0817 11:33:34.011832 687 log.go:181] (0x400080e320) (3) Data frame handling\nI0817 11:33:34.011971 687 log.go:181] (0x400014c370) Data frame received for 5\nI0817 11:33:34.012172 687 log.go:181] (0x400080ee60) (5) Data frame handling\nI0817 11:33:34.012401 687 log.go:181] 
(0x400014c370) Data frame received for 1\nI0817 11:33:34.012496 687 log.go:181] (0x400099a000) (1) Data frame handling\nI0817 11:33:34.014099 687 log.go:181] (0x400099a000) (1) Data frame sent\nI0817 11:33:34.014451 687 log.go:181] (0x400080ee60) (5) Data frame sent\nI0817 11:33:34.014614 687 log.go:181] (0x400014c370) Data frame received for 5\nI0817 11:33:34.014981 687 log.go:181] (0x400014c370) (0x400099a000) Stream removed, broadcasting: 1\n+ nc -zv -t -w 2 172.18.0.11 31421\nConnection to 172.18.0.11 31421 port [tcp/31421] succeeded!\nI0817 11:33:34.015900 687 log.go:181] (0x400080ee60) (5) Data frame handling\nI0817 11:33:34.018361 687 log.go:181] (0x400014c370) Go away received\nI0817 11:33:34.022194 687 log.go:181] (0x400014c370) (0x400099a000) Stream removed, broadcasting: 1\nI0817 11:33:34.022560 687 log.go:181] (0x400014c370) (0x400080e320) Stream removed, broadcasting: 3\nI0817 11:33:34.022775 687 log.go:181] (0x400014c370) (0x400080ee60) Stream removed, broadcasting: 5\n" Aug 17 11:33:34.035: INFO: stdout: "" Aug 17 11:33:34.036: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-2867 execpodzrxzk -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.14 31421' Aug 17 11:33:35.979: INFO: stderr: "I0817 11:33:35.876062 708 log.go:181] (0x40009a2000) (0x4000bc6280) Create stream\nI0817 11:33:35.878673 708 log.go:181] (0x40009a2000) (0x4000bc6280) Stream added, broadcasting: 1\nI0817 11:33:35.892218 708 log.go:181] (0x40009a2000) Reply frame received for 1\nI0817 11:33:35.893062 708 log.go:181] (0x40009a2000) (0x4000bc6fa0) Create stream\nI0817 11:33:35.893140 708 log.go:181] (0x40009a2000) (0x4000bc6fa0) Stream added, broadcasting: 3\nI0817 11:33:35.895000 708 log.go:181] (0x40009a2000) Reply frame received for 3\nI0817 11:33:35.895542 708 log.go:181] (0x40009a2000) (0x4000a04a00) Create stream\nI0817 11:33:35.895661 708 log.go:181] (0x40009a2000) (0x4000a04a00) Stream added, broadcasting: 5\nI0817 11:33:35.897211 708 log.go:181] (0x40009a2000) Reply frame received for 5\nI0817 11:33:35.956172 708 log.go:181] (0x40009a2000) Data frame received for 3\nI0817 11:33:35.956488 708 log.go:181] (0x40009a2000) Data frame received for 1\nI0817 11:33:35.956647 708 log.go:181] (0x4000bc6fa0) (3) Data frame handling\nI0817 11:33:35.956984 708 log.go:181] (0x4000bc6280) (1) Data frame handling\nI0817 11:33:35.957179 708 log.go:181] (0x40009a2000) Data frame received for 5\nI0817 11:33:35.957277 708 log.go:181] (0x4000a04a00) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.14 31421\nConnection to 172.18.0.14 31421 port [tcp/31421] succeeded!\nI0817 11:33:35.959527 708 log.go:181] (0x4000bc6280) (1) Data frame sent\nI0817 11:33:35.960030 708 log.go:181] (0x4000a04a00) (5) Data frame sent\nI0817 11:33:35.960113 708 log.go:181] (0x40009a2000) Data frame received for 5\nI0817 11:33:35.960177 708 log.go:181] (0x4000a04a00) (5) Data frame handling\nI0817 11:33:35.961793 708 log.go:181] (0x40009a2000) (0x4000bc6280) Stream removed, broadcasting: 1\nI0817 11:33:35.963252 708 log.go:181] (0x40009a2000) Go away received\nI0817 11:33:35.968646 708 log.go:181] (0x40009a2000) (0x4000bc6280) Stream removed, broadcasting: 1\nI0817 11:33:35.969172 708 log.go:181] (0x40009a2000) (0x4000bc6fa0) Stream removed, broadcasting: 3\nI0817 11:33:35.969421 708 log.go:181] (0x40009a2000) (0x4000a04a00) Stream removed, broadcasting: 5\n" Aug 17 11:33:35.980: INFO: stdout: "" [AfterEach] [sig-network] Services 
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 11:33:35.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2867" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:21.422 seconds] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":303,"completed":63,"skipped":1083,"failed":0} SSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 11:33:36.069: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-f92eaf42-663c-4f68-8d70-cc4e1226db1d STEP: Creating a pod to test consume configMaps Aug 17 11:33:36.470: INFO: Waiting up to 5m0s for pod "pod-configmaps-e0ddee37-c865-4544-8e1f-0cb579544524" in namespace "configmap-2913" to be "Succeeded or Failed" Aug 17 11:33:36.833: INFO: Pod "pod-configmaps-e0ddee37-c865-4544-8e1f-0cb579544524": Phase="Pending", Reason="", readiness=false. Elapsed: 362.833936ms Aug 17 11:33:38.841: INFO: Pod "pod-configmaps-e0ddee37-c865-4544-8e1f-0cb579544524": Phase="Pending", Reason="", readiness=false. Elapsed: 2.370553839s Aug 17 11:33:40.847: INFO: Pod "pod-configmaps-e0ddee37-c865-4544-8e1f-0cb579544524": Phase="Pending", Reason="", readiness=false. Elapsed: 4.37674963s Aug 17 11:33:42.886: INFO: Pod "pod-configmaps-e0ddee37-c865-4544-8e1f-0cb579544524": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.415277388s STEP: Saw pod success Aug 17 11:33:42.886: INFO: Pod "pod-configmaps-e0ddee37-c865-4544-8e1f-0cb579544524" satisfied condition "Succeeded or Failed" Aug 17 11:33:42.898: INFO: Trying to get logs from node latest-worker pod pod-configmaps-e0ddee37-c865-4544-8e1f-0cb579544524 container configmap-volume-test: STEP: delete the pod Aug 17 11:33:43.286: INFO: Waiting for pod pod-configmaps-e0ddee37-c865-4544-8e1f-0cb579544524 to disappear Aug 17 11:33:43.482: INFO: Pod pod-configmaps-e0ddee37-c865-4544-8e1f-0cb579544524 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 11:33:43.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2913" for this suite. • [SLOW TEST:7.498 seconds] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":64,"skipped":1090,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 11:33:43.570: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-configmap-lmv5 STEP: Creating a pod to test atomic-volume-subpath Aug 17 11:33:43.755: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-lmv5" in namespace "subpath-9817" to be "Succeeded or Failed" Aug 17 11:33:43.761: INFO: Pod "pod-subpath-test-configmap-lmv5": Phase="Pending", Reason="", readiness=false. Elapsed: 5.707923ms Aug 17 11:33:45.767: INFO: Pod "pod-subpath-test-configmap-lmv5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011717312s Aug 17 11:33:47.873: INFO: Pod "pod-subpath-test-configmap-lmv5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.117644401s Aug 17 11:33:49.878: INFO: Pod "pod-subpath-test-configmap-lmv5": Phase="Running", Reason="", readiness=true. 
Elapsed: 6.122947239s Aug 17 11:33:51.883: INFO: Pod "pod-subpath-test-configmap-lmv5": Phase="Running", Reason="", readiness=true. Elapsed: 8.127973324s Aug 17 11:33:53.889: INFO: Pod "pod-subpath-test-configmap-lmv5": Phase="Running", Reason="", readiness=true. Elapsed: 10.133514008s Aug 17 11:33:55.964: INFO: Pod "pod-subpath-test-configmap-lmv5": Phase="Running", Reason="", readiness=true. Elapsed: 12.208722033s Aug 17 11:33:57.970: INFO: Pod "pod-subpath-test-configmap-lmv5": Phase="Running", Reason="", readiness=true. Elapsed: 14.21405125s Aug 17 11:33:59.974: INFO: Pod "pod-subpath-test-configmap-lmv5": Phase="Running", Reason="", readiness=true. Elapsed: 16.21850115s Aug 17 11:34:01.981: INFO: Pod "pod-subpath-test-configmap-lmv5": Phase="Running", Reason="", readiness=true. Elapsed: 18.225298202s Aug 17 11:34:03.988: INFO: Pod "pod-subpath-test-configmap-lmv5": Phase="Running", Reason="", readiness=true. Elapsed: 20.232738584s Aug 17 11:34:06.010: INFO: Pod "pod-subpath-test-configmap-lmv5": Phase="Running", Reason="", readiness=true. Elapsed: 22.254189094s Aug 17 11:34:08.015: INFO: Pod "pod-subpath-test-configmap-lmv5": Phase="Running", Reason="", readiness=true. Elapsed: 24.259573836s Aug 17 11:34:10.514: INFO: Pod "pod-subpath-test-configmap-lmv5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.758062564s STEP: Saw pod success Aug 17 11:34:10.514: INFO: Pod "pod-subpath-test-configmap-lmv5" satisfied condition "Succeeded or Failed" Aug 17 11:34:10.534: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-configmap-lmv5 container test-container-subpath-configmap-lmv5: STEP: delete the pod Aug 17 11:34:10.682: INFO: Waiting for pod pod-subpath-test-configmap-lmv5 to disappear Aug 17 11:34:10.760: INFO: Pod pod-subpath-test-configmap-lmv5 no longer exists STEP: Deleting pod pod-subpath-test-configmap-lmv5 Aug 17 11:34:10.761: INFO: Deleting pod "pod-subpath-test-configmap-lmv5" in namespace "subpath-9817" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 11:34:10.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9817" for this suite. 
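The subpath spec above exercises the atomic-writer volume path: a single ConfigMap key is mounted with subPath over a file that already exists in the container image. A minimal sketch of the same idea, assuming a reachable cluster and kubectl on PATH (the names cm-demo and pod-subpath-demo are illustrative, not taken from this run, and the e2e framework's read/write probe loop is skipped):

# Create a ConfigMap whose key will shadow a file shipped in the image.
kubectl create configmap cm-demo --from-literal=index.html='hello from configmap'
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-demo
spec:
  containers:
  - name: web
    image: docker.io/library/httpd:2.4.38-alpine
    volumeMounts:
    - name: cm
      # subPath mounts just this one key over the existing htdocs/index.html
      mountPath: /usr/local/apache2/htdocs/index.html
      subPath: index.html
  volumes:
  - name: cm
    configMap:
      name: cm-demo
EOF
kubectl wait --for=condition=Ready pod/pod-subpath-demo --timeout=120s
# The ConfigMap content should now be visible at the pre-existing path.
kubectl exec pod-subpath-demo -- cat /usr/local/apache2/htdocs/index.html

The spec's own timing summary follows.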
• [SLOW TEST:27.425 seconds] [sig-storage] Subpath /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":303,"completed":65,"skipped":1106,"failed":0} SSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 11:34:10.997: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-291.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-291.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-291.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-291.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-291.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-291.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 17 11:34:19.789: INFO: DNS probes using dns-291/dns-test-53841e44-3de5-433c-bbab-0dbbd8733b59 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 11:34:20.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-291" for this suite. • [SLOW TEST:9.984 seconds] [sig-network] DNS /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":303,"completed":66,"skipped":1113,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] PodTemplates should delete a collection of pod templates [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] PodTemplates /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 11:34:20.986: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a collection of pod templates [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create set of pod templates Aug 17 11:34:21.995: INFO: created test-podtemplate-1 Aug 17 11:34:22.376: INFO: created test-podtemplate-2 Aug 17 11:34:22.382: INFO: created test-podtemplate-3 STEP: get a list of pod templates with a label in the current namespace STEP: delete collection of pod templates Aug 17 11:34:22.447: INFO: requesting DeleteCollection of pod templates STEP: check that the list of pod templates matches the requested quantity Aug 17 11:34:22.642: INFO: requesting list of pod templates to confirm quantity [AfterEach] [sig-node] PodTemplates /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 11:34:22.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-7111" for this suite. 
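The PodTemplates spec above creates three labelled templates and then removes them with a single DeleteCollection request. The same flow can be reproduced by hand; a minimal sketch assuming a reachable cluster (demo-podtemplate and the podtemplate-set label are illustrative names, and one template stands in for the test's three):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PodTemplate
metadata:
  name: demo-podtemplate
  labels:
    podtemplate-set: demo
template:
  metadata:
    labels:
      podtemplate-set: demo
  spec:
    containers:
    - name: pause
      image: k8s.gcr.io/pause:3.2
EOF
# List by label, delete the whole collection by label, then confirm it is gone.
kubectl get podtemplates -l podtemplate-set=demo
kubectl delete podtemplates -l podtemplate-set=demo
kubectl get podtemplates -l podtemplate-set=demo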
•{"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":303,"completed":67,"skipped":1153,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 11:34:22.659: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod Aug 17 11:34:23.313: INFO: PodSpec: initContainers in spec.initContainers Aug 17 11:35:16.783: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-86b585dc-ea40-48a5-ba9d-3aca33b8c96b", GenerateName:"", Namespace:"init-container-7170", SelfLink:"/api/v1/namespaces/init-container-7170/pods/pod-init-86b585dc-ea40-48a5-ba9d-3aca33b8c96b", UID:"89f98ae7-4b30-4c5b-9a50-8302d91a08ad", ResourceVersion:"708760", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63733260863, loc:(*time.Location)(0x6e4f160)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"312256023"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0x4004ede2a0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4004ede2c0)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0x4004ede2e0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4004ede300)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-t4gg7", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0x4005e7a200), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), 
ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-t4gg7", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-t4gg7", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-t4gg7", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, 
EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x400371a4d8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"latest-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x400081c5b0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0x400371a560)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0x400371a580)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0x400371a588), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0x400371a58c), PreemptionPolicy:(*v1.PreemptionPolicy)(0x400354e360), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733260863, loc:(*time.Location)(0x6e4f160)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733260863, loc:(*time.Location)(0x6e4f160)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733260863, loc:(*time.Location)(0x6e4f160)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733260863, loc:(*time.Location)(0x6e4f160)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.14", PodIP:"10.244.1.214", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.1.214"}}, StartTime:(*v1.Time)(0x4004ede320), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0x4004ede360), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0x400081c690)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://d3873e44e413059840a37555dd2ecf4c0f4cad70affe08d48de96d1708420295", 
Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0x4004ede380), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0x4004ede340), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0x400371a60f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 11:35:16.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7170" for this suite. • [SLOW TEST:54.157 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":303,"completed":68,"skipped":1172,"failed":0} S ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 11:35:16.817: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 17 11:35:17.205: INFO: Waiting up to 5m0s for pod "downwardapi-volume-afcd3888-cd1c-4f7f-8a5e-88ae882cd5f4" in namespace "downward-api-7740" to be "Succeeded or Failed" Aug 17 
11:35:17.221: INFO: Pod "downwardapi-volume-afcd3888-cd1c-4f7f-8a5e-88ae882cd5f4": Phase="Pending", Reason="", readiness=false. Elapsed: 15.805793ms Aug 17 11:35:19.408: INFO: Pod "downwardapi-volume-afcd3888-cd1c-4f7f-8a5e-88ae882cd5f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.201976223s Aug 17 11:35:21.417: INFO: Pod "downwardapi-volume-afcd3888-cd1c-4f7f-8a5e-88ae882cd5f4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.210955924s STEP: Saw pod success Aug 17 11:35:21.417: INFO: Pod "downwardapi-volume-afcd3888-cd1c-4f7f-8a5e-88ae882cd5f4" satisfied condition "Succeeded or Failed" Aug 17 11:35:21.422: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-afcd3888-cd1c-4f7f-8a5e-88ae882cd5f4 container client-container: STEP: delete the pod Aug 17 11:35:21.475: INFO: Waiting for pod downwardapi-volume-afcd3888-cd1c-4f7f-8a5e-88ae882cd5f4 to disappear Aug 17 11:35:21.495: INFO: Pod downwardapi-volume-afcd3888-cd1c-4f7f-8a5e-88ae882cd5f4 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 11:35:21.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7740" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":303,"completed":69,"skipped":1173,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Service endpoints latency /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 11:35:21.517: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 17 11:35:21.593: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-5790 I0817 11:35:21.681094 10 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-5790, replica count: 1 I0817 11:35:22.733210 10 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0817 11:35:23.733872 10 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0817 11:35:24.734573 10 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0817 11:35:25.735231 10 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0817 11:35:26.735938 10 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 
inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 17 11:35:27.069: INFO: Created: latency-svc-zdbhm Aug 17 11:35:27.148: INFO: Got endpoints: latency-svc-zdbhm [310.213671ms] Aug 17 11:35:27.420: INFO: Created: latency-svc-8wbsg Aug 17 11:35:27.430: INFO: Got endpoints: latency-svc-8wbsg [280.245042ms] Aug 17 11:35:27.511: INFO: Created: latency-svc-nzvsd Aug 17 11:35:27.587: INFO: Got endpoints: latency-svc-nzvsd [436.513876ms] Aug 17 11:35:27.643: INFO: Created: latency-svc-ltxvw Aug 17 11:35:27.683: INFO: Got endpoints: latency-svc-ltxvw [530.736957ms] Aug 17 11:35:27.758: INFO: Created: latency-svc-ldpjc Aug 17 11:35:27.772: INFO: Got endpoints: latency-svc-ldpjc [622.418817ms] Aug 17 11:35:27.862: INFO: Created: latency-svc-pkhtr Aug 17 11:35:27.878: INFO: Got endpoints: latency-svc-pkhtr [726.1967ms] Aug 17 11:35:27.915: INFO: Created: latency-svc-kks6t Aug 17 11:35:27.923: INFO: Got endpoints: latency-svc-kks6t [772.933679ms] Aug 17 11:35:28.038: INFO: Created: latency-svc-9t7vd Aug 17 11:35:28.054: INFO: Got endpoints: latency-svc-9t7vd [903.017437ms] Aug 17 11:35:28.104: INFO: Created: latency-svc-5qlxd Aug 17 11:35:28.127: INFO: Got endpoints: latency-svc-5qlxd [977.440269ms] Aug 17 11:35:28.385: INFO: Created: latency-svc-jqhkk Aug 17 11:35:28.414: INFO: Got endpoints: latency-svc-jqhkk [1.2618309s] Aug 17 11:35:28.556: INFO: Created: latency-svc-26dvz Aug 17 11:35:28.565: INFO: Got endpoints: latency-svc-26dvz [1.413679851s] Aug 17 11:35:28.610: INFO: Created: latency-svc-mzv2c Aug 17 11:35:28.780: INFO: Got endpoints: latency-svc-mzv2c [1.630265115s] Aug 17 11:35:28.880: INFO: Created: latency-svc-xl7mz Aug 17 11:35:28.932: INFO: Got endpoints: latency-svc-xl7mz [1.781221357s] Aug 17 11:35:29.051: INFO: Created: latency-svc-479x4 Aug 17 11:35:29.095: INFO: Got endpoints: latency-svc-479x4 [1.943240998s] Aug 17 11:35:29.099: INFO: Created: latency-svc-q4x6x Aug 17 11:35:29.111: INFO: Got endpoints: latency-svc-q4x6x [1.961061169s] Aug 17 11:35:29.213: INFO: Created: latency-svc-zmkcc Aug 17 11:35:29.219: INFO: Got endpoints: latency-svc-zmkcc [2.066753357s] Aug 17 11:35:29.243: INFO: Created: latency-svc-wc5qm Aug 17 11:35:29.261: INFO: Got endpoints: latency-svc-wc5qm [1.830457845s] Aug 17 11:35:29.304: INFO: Created: latency-svc-gw6qh Aug 17 11:35:29.456: INFO: Got endpoints: latency-svc-gw6qh [1.869158115s] Aug 17 11:35:29.770: INFO: Created: latency-svc-wnmcs Aug 17 11:35:29.783: INFO: Got endpoints: latency-svc-wnmcs [2.100376183s] Aug 17 11:35:29.868: INFO: Created: latency-svc-hc8q8 Aug 17 11:35:29.886: INFO: Got endpoints: latency-svc-hc8q8 [2.114023687s] Aug 17 11:35:29.922: INFO: Created: latency-svc-pchmp Aug 17 11:35:29.930: INFO: Got endpoints: latency-svc-pchmp [2.051927168s] Aug 17 11:35:29.987: INFO: Created: latency-svc-tbpn7 Aug 17 11:35:30.010: INFO: Got endpoints: latency-svc-tbpn7 [2.087544717s] Aug 17 11:35:30.037: INFO: Created: latency-svc-7pmpp Aug 17 11:35:30.052: INFO: Got endpoints: latency-svc-7pmpp [1.997401484s] Aug 17 11:35:30.109: INFO: Created: latency-svc-g4j2v Aug 17 11:35:30.123: INFO: Got endpoints: latency-svc-g4j2v [1.995601544s] Aug 17 11:35:30.141: INFO: Created: latency-svc-42d5b Aug 17 11:35:30.153: INFO: Got endpoints: latency-svc-42d5b [1.739127856s] Aug 17 11:35:30.271: INFO: Created: latency-svc-wg7br Aug 17 11:35:30.283: INFO: Got endpoints: latency-svc-wg7br [1.718348191s] Aug 17 11:35:30.324: INFO: Created: latency-svc-s6dkw Aug 17 11:35:30.334: INFO: Got endpoints: latency-svc-s6dkw [1.553534902s] Aug 17 
11:35:30.407: INFO: Created: latency-svc-tsrq8 Aug 17 11:35:30.411: INFO: Got endpoints: latency-svc-tsrq8 [1.478203768s] Aug 17 11:35:30.459: INFO: Created: latency-svc-jw2mz Aug 17 11:35:30.490: INFO: Got endpoints: latency-svc-jw2mz [1.39463017s] Aug 17 11:35:30.566: INFO: Created: latency-svc-lcllh Aug 17 11:35:30.585: INFO: Got endpoints: latency-svc-lcllh [1.473250544s] Aug 17 11:35:30.643: INFO: Created: latency-svc-tsj5s Aug 17 11:35:30.653: INFO: Got endpoints: latency-svc-tsj5s [1.434189761s] Aug 17 11:35:30.744: INFO: Created: latency-svc-r6bck Aug 17 11:35:30.755: INFO: Got endpoints: latency-svc-r6bck [1.493576032s] Aug 17 11:35:30.789: INFO: Created: latency-svc-646c8 Aug 17 11:35:30.804: INFO: Got endpoints: latency-svc-646c8 [1.347339039s] Aug 17 11:35:30.840: INFO: Created: latency-svc-2l5p6 Aug 17 11:35:30.928: INFO: Got endpoints: latency-svc-2l5p6 [1.144348696s] Aug 17 11:35:30.969: INFO: Created: latency-svc-jrdvs Aug 17 11:35:30.984: INFO: Got endpoints: latency-svc-jrdvs [1.097368267s] Aug 17 11:35:31.086: INFO: Created: latency-svc-q75cl Aug 17 11:35:31.102: INFO: Got endpoints: latency-svc-q75cl [1.171614464s] Aug 17 11:35:31.165: INFO: Created: latency-svc-bwq7z Aug 17 11:35:31.403: INFO: Got endpoints: latency-svc-bwq7z [1.39274245s] Aug 17 11:35:31.748: INFO: Created: latency-svc-46cb6 Aug 17 11:35:31.782: INFO: Got endpoints: latency-svc-46cb6 [1.729841754s] Aug 17 11:35:31.808: INFO: Created: latency-svc-slfdl Aug 17 11:35:31.824: INFO: Got endpoints: latency-svc-slfdl [1.700724271s] Aug 17 11:35:31.932: INFO: Created: latency-svc-n855j Aug 17 11:35:31.947: INFO: Got endpoints: latency-svc-n855j [1.793034543s] Aug 17 11:35:32.030: INFO: Created: latency-svc-ptnb6 Aug 17 11:35:32.109: INFO: Got endpoints: latency-svc-ptnb6 [1.825807603s] Aug 17 11:35:32.114: INFO: Created: latency-svc-ctq4v Aug 17 11:35:32.127: INFO: Got endpoints: latency-svc-ctq4v [1.792333145s] Aug 17 11:35:32.166: INFO: Created: latency-svc-vtjml Aug 17 11:35:32.181: INFO: Got endpoints: latency-svc-vtjml [1.770520535s] Aug 17 11:35:32.269: INFO: Created: latency-svc-pdldj Aug 17 11:35:32.274: INFO: Got endpoints: latency-svc-pdldj [1.783533671s] Aug 17 11:35:32.300: INFO: Created: latency-svc-rhr8c Aug 17 11:35:32.314: INFO: Got endpoints: latency-svc-rhr8c [1.72890251s] Aug 17 11:35:32.329: INFO: Created: latency-svc-mtkwq Aug 17 11:35:32.344: INFO: Got endpoints: latency-svc-mtkwq [1.690971762s] Aug 17 11:35:32.360: INFO: Created: latency-svc-xjjdm Aug 17 11:35:32.458: INFO: Got endpoints: latency-svc-xjjdm [1.702976592s] Aug 17 11:35:32.458: INFO: Created: latency-svc-xxxrj Aug 17 11:35:32.464: INFO: Got endpoints: latency-svc-xxxrj [1.65965932s] Aug 17 11:35:32.547: INFO: Created: latency-svc-wjltj Aug 17 11:35:32.600: INFO: Got endpoints: latency-svc-wjltj [1.671464849s] Aug 17 11:35:32.613: INFO: Created: latency-svc-pcrp7 Aug 17 11:35:32.633: INFO: Got endpoints: latency-svc-pcrp7 [1.649116601s] Aug 17 11:35:32.675: INFO: Created: latency-svc-q7p4z Aug 17 11:35:32.742: INFO: Got endpoints: latency-svc-q7p4z [1.639604462s] Aug 17 11:35:32.756: INFO: Created: latency-svc-vv92c Aug 17 11:35:32.786: INFO: Got endpoints: latency-svc-vv92c [1.382361887s] Aug 17 11:35:32.816: INFO: Created: latency-svc-7z9fc Aug 17 11:35:32.833: INFO: Got endpoints: latency-svc-7z9fc [1.05067565s] Aug 17 11:35:32.905: INFO: Created: latency-svc-csqcg Aug 17 11:35:32.908: INFO: Got endpoints: latency-svc-csqcg [1.083677051s] Aug 17 11:35:32.958: INFO: Created: latency-svc-cm7c4 Aug 17 11:35:32.971: INFO: 
Got endpoints: latency-svc-cm7c4 [1.02453095s] Aug 17 11:35:32.996: INFO: Created: latency-svc-bc49s Aug 17 11:35:33.060: INFO: Got endpoints: latency-svc-bc49s [950.060568ms] Aug 17 11:35:33.087: INFO: Created: latency-svc-7fxn8 Aug 17 11:35:33.097: INFO: Got endpoints: latency-svc-7fxn8 [970.25658ms] Aug 17 11:35:33.114: INFO: Created: latency-svc-qwr7v Aug 17 11:35:33.127: INFO: Got endpoints: latency-svc-qwr7v [945.37268ms] Aug 17 11:35:33.143: INFO: Created: latency-svc-47k5d Aug 17 11:35:33.157: INFO: Got endpoints: latency-svc-47k5d [883.403799ms] Aug 17 11:35:33.203: INFO: Created: latency-svc-5kkkx Aug 17 11:35:33.242: INFO: Created: latency-svc-c6654 Aug 17 11:35:33.244: INFO: Got endpoints: latency-svc-5kkkx [929.273924ms] Aug 17 11:35:33.285: INFO: Got endpoints: latency-svc-c6654 [940.808014ms] Aug 17 11:35:33.395: INFO: Created: latency-svc-lfh5x Aug 17 11:35:33.400: INFO: Got endpoints: latency-svc-lfh5x [941.772117ms] Aug 17 11:35:33.431: INFO: Created: latency-svc-r64gw Aug 17 11:35:33.463: INFO: Got endpoints: latency-svc-r64gw [998.625529ms] Aug 17 11:35:33.494: INFO: Created: latency-svc-dhcsx Aug 17 11:35:33.557: INFO: Got endpoints: latency-svc-dhcsx [956.979494ms] Aug 17 11:35:33.560: INFO: Created: latency-svc-6sp9h Aug 17 11:35:33.570: INFO: Got endpoints: latency-svc-6sp9h [936.917484ms] Aug 17 11:35:33.589: INFO: Created: latency-svc-phlk7 Aug 17 11:35:33.611: INFO: Got endpoints: latency-svc-phlk7 [868.890933ms] Aug 17 11:35:33.641: INFO: Created: latency-svc-vpvtt Aug 17 11:35:33.649: INFO: Got endpoints: latency-svc-vpvtt [862.893894ms] Aug 17 11:35:33.719: INFO: Created: latency-svc-lqksb Aug 17 11:35:33.758: INFO: Created: latency-svc-6f7zg Aug 17 11:35:33.759: INFO: Got endpoints: latency-svc-lqksb [926.201339ms] Aug 17 11:35:33.782: INFO: Got endpoints: latency-svc-6f7zg [873.511604ms] Aug 17 11:35:33.817: INFO: Created: latency-svc-kztzk Aug 17 11:35:33.857: INFO: Got endpoints: latency-svc-kztzk [885.664689ms] Aug 17 11:35:33.882: INFO: Created: latency-svc-zc9ks Aug 17 11:35:33.898: INFO: Got endpoints: latency-svc-zc9ks [837.674102ms] Aug 17 11:35:33.930: INFO: Created: latency-svc-8w77s Aug 17 11:35:33.945: INFO: Got endpoints: latency-svc-8w77s [848.121812ms] Aug 17 11:35:34.007: INFO: Created: latency-svc-hjkpp Aug 17 11:35:34.011: INFO: Got endpoints: latency-svc-hjkpp [884.067816ms] Aug 17 11:35:34.035: INFO: Created: latency-svc-7sd2w Aug 17 11:35:34.059: INFO: Got endpoints: latency-svc-7sd2w [901.565ms] Aug 17 11:35:34.093: INFO: Created: latency-svc-5296p Aug 17 11:35:34.102: INFO: Got endpoints: latency-svc-5296p [858.064082ms] Aug 17 11:35:34.163: INFO: Created: latency-svc-7g6r2 Aug 17 11:35:34.165: INFO: Got endpoints: latency-svc-7g6r2 [879.42718ms] Aug 17 11:35:34.207: INFO: Created: latency-svc-8qftx Aug 17 11:35:34.331: INFO: Got endpoints: latency-svc-8qftx [930.490895ms] Aug 17 11:35:34.331: INFO: Created: latency-svc-wxvkq Aug 17 11:35:34.337: INFO: Got endpoints: latency-svc-wxvkq [873.827309ms] Aug 17 11:35:34.380: INFO: Created: latency-svc-gjvxv Aug 17 11:35:34.391: INFO: Got endpoints: latency-svc-gjvxv [834.267252ms] Aug 17 11:35:34.408: INFO: Created: latency-svc-p4j2n Aug 17 11:35:34.473: INFO: Got endpoints: latency-svc-p4j2n [902.793202ms] Aug 17 11:35:34.484: INFO: Created: latency-svc-t2zg7 Aug 17 11:35:34.500: INFO: Got endpoints: latency-svc-t2zg7 [888.724486ms] Aug 17 11:35:34.520: INFO: Created: latency-svc-w5d9c Aug 17 11:35:34.542: INFO: Got endpoints: latency-svc-w5d9c [892.938353ms] Aug 17 11:35:34.560: INFO: 
Created: latency-svc-2fdw7 Aug 17 11:35:34.637: INFO: Created: latency-svc-hhsb6 Aug 17 11:35:34.638: INFO: Got endpoints: latency-svc-2fdw7 [878.525472ms] Aug 17 11:35:34.641: INFO: Got endpoints: latency-svc-hhsb6 [859.292494ms] Aug 17 11:35:34.706: INFO: Created: latency-svc-cql9m Aug 17 11:35:34.719: INFO: Got endpoints: latency-svc-cql9m [861.436055ms] Aug 17 11:35:34.780: INFO: Created: latency-svc-4v4n2 Aug 17 11:35:34.812: INFO: Got endpoints: latency-svc-4v4n2 [913.748448ms] Aug 17 11:35:34.842: INFO: Created: latency-svc-92bw9 Aug 17 11:35:34.857: INFO: Got endpoints: latency-svc-92bw9 [911.244833ms] Aug 17 11:35:34.877: INFO: Created: latency-svc-2z7qc Aug 17 11:35:34.934: INFO: Got endpoints: latency-svc-2z7qc [922.645949ms] Aug 17 11:35:34.989: INFO: Created: latency-svc-nj74d Aug 17 11:35:35.001: INFO: Got endpoints: latency-svc-nj74d [941.38026ms] Aug 17 11:35:35.079: INFO: Created: latency-svc-5hbnl Aug 17 11:35:35.083: INFO: Got endpoints: latency-svc-5hbnl [981.42829ms] Aug 17 11:35:35.105: INFO: Created: latency-svc-mx87f Aug 17 11:35:35.115: INFO: Got endpoints: latency-svc-mx87f [949.630988ms] Aug 17 11:35:35.137: INFO: Created: latency-svc-kpxxn Aug 17 11:35:35.160: INFO: Got endpoints: latency-svc-kpxxn [829.51215ms] Aug 17 11:35:35.234: INFO: Created: latency-svc-48j8s Aug 17 11:35:35.264: INFO: Got endpoints: latency-svc-48j8s [927.000964ms] Aug 17 11:35:35.288: INFO: Created: latency-svc-25879 Aug 17 11:35:35.296: INFO: Got endpoints: latency-svc-25879 [904.061291ms] Aug 17 11:35:35.320: INFO: Created: latency-svc-knx8w Aug 17 11:35:35.420: INFO: Got endpoints: latency-svc-knx8w [946.372453ms] Aug 17 11:35:35.422: INFO: Created: latency-svc-8bsrn Aug 17 11:35:35.428: INFO: Got endpoints: latency-svc-8bsrn [928.1757ms] Aug 17 11:35:35.455: INFO: Created: latency-svc-mql8r Aug 17 11:35:35.472: INFO: Got endpoints: latency-svc-mql8r [929.469215ms] Aug 17 11:35:35.496: INFO: Created: latency-svc-8lfv9 Aug 17 11:35:35.501: INFO: Got endpoints: latency-svc-8lfv9 [863.167825ms] Aug 17 11:35:35.576: INFO: Created: latency-svc-7qj9h Aug 17 11:35:35.578: INFO: Got endpoints: latency-svc-7qj9h [936.747951ms] Aug 17 11:35:35.621: INFO: Created: latency-svc-b5248 Aug 17 11:35:35.635: INFO: Got endpoints: latency-svc-b5248 [915.772246ms] Aug 17 11:35:35.650: INFO: Created: latency-svc-rhlpp Aug 17 11:35:35.664: INFO: Got endpoints: latency-svc-rhlpp [851.73601ms] Aug 17 11:35:35.731: INFO: Created: latency-svc-l28zn Aug 17 11:35:35.737: INFO: Got endpoints: latency-svc-l28zn [880.281233ms] Aug 17 11:35:35.766: INFO: Created: latency-svc-zqp8c Aug 17 11:35:35.784: INFO: Got endpoints: latency-svc-zqp8c [849.76677ms] Aug 17 11:35:35.803: INFO: Created: latency-svc-9g4ks Aug 17 11:35:35.815: INFO: Got endpoints: latency-svc-9g4ks [814.266722ms] Aug 17 11:35:35.904: INFO: Created: latency-svc-hcmgv Aug 17 11:35:35.909: INFO: Got endpoints: latency-svc-hcmgv [825.56829ms] Aug 17 11:35:36.073: INFO: Created: latency-svc-mqpg6 Aug 17 11:35:36.092: INFO: Got endpoints: latency-svc-mqpg6 [976.628971ms] Aug 17 11:35:36.095: INFO: Created: latency-svc-qqblf Aug 17 11:35:36.115: INFO: Got endpoints: latency-svc-qqblf [954.079712ms] Aug 17 11:35:36.252: INFO: Created: latency-svc-5j785 Aug 17 11:35:36.263: INFO: Got endpoints: latency-svc-5j785 [998.648658ms] Aug 17 11:35:36.302: INFO: Created: latency-svc-dgdmq Aug 17 11:35:36.313: INFO: Got endpoints: latency-svc-dgdmq [1.017396892s] Aug 17 11:35:36.403: INFO: Created: latency-svc-wzldx Aug 17 11:35:36.416: INFO: Got endpoints: 
latency-svc-wzldx [995.97688ms] Aug 17 11:35:36.440: INFO: Created: latency-svc-rmfdd Aug 17 11:35:36.477: INFO: Got endpoints: latency-svc-rmfdd [1.048295987s] Aug 17 11:35:36.545: INFO: Created: latency-svc-qtw9q Aug 17 11:35:36.556: INFO: Got endpoints: latency-svc-qtw9q [1.084226316s] Aug 17 11:35:36.977: INFO: Created: latency-svc-22r5g Aug 17 11:35:36.977: INFO: Got endpoints: latency-svc-22r5g [1.475880531s] Aug 17 11:35:37.215: INFO: Created: latency-svc-fvqnb Aug 17 11:35:37.278: INFO: Created: latency-svc-2wwqc Aug 17 11:35:37.279: INFO: Got endpoints: latency-svc-fvqnb [1.700399347s] Aug 17 11:35:37.312: INFO: Got endpoints: latency-svc-2wwqc [1.677488075s] Aug 17 11:35:37.402: INFO: Created: latency-svc-wfdzk Aug 17 11:35:37.425: INFO: Got endpoints: latency-svc-wfdzk [1.76080321s] Aug 17 11:35:37.465: INFO: Created: latency-svc-26twf Aug 17 11:35:37.478: INFO: Got endpoints: latency-svc-26twf [1.740844032s] Aug 17 11:35:37.539: INFO: Created: latency-svc-dw5r6 Aug 17 11:35:37.552: INFO: Got endpoints: latency-svc-dw5r6 [1.767145282s] Aug 17 11:35:37.577: INFO: Created: latency-svc-zbfkv Aug 17 11:35:37.619: INFO: Got endpoints: latency-svc-zbfkv [1.803706119s] Aug 17 11:35:37.738: INFO: Created: latency-svc-hd8qm Aug 17 11:35:37.756: INFO: Got endpoints: latency-svc-hd8qm [1.846343793s] Aug 17 11:35:37.812: INFO: Created: latency-svc-4brtd Aug 17 11:35:38.098: INFO: Got endpoints: latency-svc-4brtd [2.005977973s] Aug 17 11:35:38.564: INFO: Created: latency-svc-stxgs Aug 17 11:35:38.568: INFO: Got endpoints: latency-svc-stxgs [2.453503859s] Aug 17 11:35:38.815: INFO: Created: latency-svc-8z6nh Aug 17 11:35:39.007: INFO: Got endpoints: latency-svc-8z6nh [2.743608062s] Aug 17 11:35:39.007: INFO: Created: latency-svc-fw2ms Aug 17 11:35:39.019: INFO: Got endpoints: latency-svc-fw2ms [2.70539586s] Aug 17 11:35:39.234: INFO: Created: latency-svc-966jh Aug 17 11:35:39.271: INFO: Got endpoints: latency-svc-966jh [2.854856189s] Aug 17 11:35:39.408: INFO: Created: latency-svc-jc86j Aug 17 11:35:39.427: INFO: Got endpoints: latency-svc-jc86j [2.950507403s] Aug 17 11:35:39.598: INFO: Created: latency-svc-48cv6 Aug 17 11:35:39.614: INFO: Got endpoints: latency-svc-48cv6 [3.057721846s] Aug 17 11:35:39.646: INFO: Created: latency-svc-qmlh5 Aug 17 11:35:39.656: INFO: Got endpoints: latency-svc-qmlh5 [2.678203096s] Aug 17 11:35:39.749: INFO: Created: latency-svc-vrpdp Aug 17 11:35:39.754: INFO: Got endpoints: latency-svc-vrpdp [2.474837363s] Aug 17 11:35:39.823: INFO: Created: latency-svc-n2gbr Aug 17 11:35:40.012: INFO: Got endpoints: latency-svc-n2gbr [2.699732116s] Aug 17 11:35:40.312: INFO: Created: latency-svc-pz52l Aug 17 11:35:40.354: INFO: Got endpoints: latency-svc-pz52l [2.929491765s] Aug 17 11:35:40.393: INFO: Created: latency-svc-prcwf Aug 17 11:35:40.406: INFO: Got endpoints: latency-svc-prcwf [2.927966782s] Aug 17 11:35:40.484: INFO: Created: latency-svc-78gnx Aug 17 11:35:40.493: INFO: Got endpoints: latency-svc-78gnx [2.940744463s] Aug 17 11:35:40.543: INFO: Created: latency-svc-8pbc6 Aug 17 11:35:40.558: INFO: Got endpoints: latency-svc-8pbc6 [2.938854831s] Aug 17 11:35:40.573: INFO: Created: latency-svc-c6bw4 Aug 17 11:35:40.673: INFO: Got endpoints: latency-svc-c6bw4 [2.916346472s] Aug 17 11:35:40.674: INFO: Created: latency-svc-vgsvc Aug 17 11:35:40.677: INFO: Got endpoints: latency-svc-vgsvc [2.579276212s] Aug 17 11:35:40.703: INFO: Created: latency-svc-jh8tp Aug 17 11:35:40.728: INFO: Got endpoints: latency-svc-jh8tp [2.158991788s] Aug 17 11:35:40.757: INFO: Created: 
latency-svc-d2bcg Aug 17 11:35:40.821: INFO: Got endpoints: latency-svc-d2bcg [1.814581644s] Aug 17 11:35:40.831: INFO: Created: latency-svc-w4mdh Aug 17 11:35:40.847: INFO: Got endpoints: latency-svc-w4mdh [1.827541737s] Aug 17 11:35:40.893: INFO: Created: latency-svc-78m5b Aug 17 11:35:40.914: INFO: Got endpoints: latency-svc-78m5b [1.642808806s] Aug 17 11:35:41.055: INFO: Created: latency-svc-xvqg4 Aug 17 11:35:41.058: INFO: Got endpoints: latency-svc-xvqg4 [1.630650837s] Aug 17 11:35:41.303: INFO: Created: latency-svc-gjc6v Aug 17 11:35:41.455: INFO: Got endpoints: latency-svc-gjc6v [1.840162958s] Aug 17 11:35:41.460: INFO: Created: latency-svc-r6mlj Aug 17 11:35:41.526: INFO: Got endpoints: latency-svc-r6mlj [1.869882798s] Aug 17 11:35:41.684: INFO: Created: latency-svc-nv42q Aug 17 11:35:41.729: INFO: Got endpoints: latency-svc-nv42q [1.974926342s] Aug 17 11:35:41.769: INFO: Created: latency-svc-lhbt2 Aug 17 11:35:41.882: INFO: Got endpoints: latency-svc-lhbt2 [1.869596302s] Aug 17 11:35:41.904: INFO: Created: latency-svc-ckgvx Aug 17 11:35:41.930: INFO: Got endpoints: latency-svc-ckgvx [1.575128845s] Aug 17 11:35:41.976: INFO: Created: latency-svc-pfrbx Aug 17 11:35:42.054: INFO: Got endpoints: latency-svc-pfrbx [1.647736171s] Aug 17 11:35:42.066: INFO: Created: latency-svc-wrvxn Aug 17 11:35:42.083: INFO: Got endpoints: latency-svc-wrvxn [1.590670573s] Aug 17 11:35:42.153: INFO: Created: latency-svc-7vlwn Aug 17 11:35:42.209: INFO: Got endpoints: latency-svc-7vlwn [1.650722056s] Aug 17 11:35:42.215: INFO: Created: latency-svc-5zk8b Aug 17 11:35:42.264: INFO: Got endpoints: latency-svc-5zk8b [1.591577351s] Aug 17 11:35:42.389: INFO: Created: latency-svc-4qrg4 Aug 17 11:35:42.409: INFO: Got endpoints: latency-svc-4qrg4 [1.731561903s] Aug 17 11:35:42.426: INFO: Created: latency-svc-fpvjb Aug 17 11:35:42.439: INFO: Got endpoints: latency-svc-fpvjb [1.711438249s] Aug 17 11:35:42.468: INFO: Created: latency-svc-tql47 Aug 17 11:35:42.571: INFO: Created: latency-svc-wkj25 Aug 17 11:35:42.571: INFO: Got endpoints: latency-svc-tql47 [1.7492339s] Aug 17 11:35:42.624: INFO: Got endpoints: latency-svc-wkj25 [1.777285447s] Aug 17 11:35:42.625: INFO: Created: latency-svc-ctsf6 Aug 17 11:35:42.637: INFO: Got endpoints: latency-svc-ctsf6 [1.723058382s] Aug 17 11:35:42.659: INFO: Created: latency-svc-g2xkn Aug 17 11:35:42.792: INFO: Got endpoints: latency-svc-g2xkn [1.733321883s] Aug 17 11:35:42.800: INFO: Created: latency-svc-g6nh4 Aug 17 11:35:42.805: INFO: Got endpoints: latency-svc-g6nh4 [1.350304729s] Aug 17 11:35:42.824: INFO: Created: latency-svc-4pt62 Aug 17 11:35:42.836: INFO: Got endpoints: latency-svc-4pt62 [1.310513044s] Aug 17 11:35:42.857: INFO: Created: latency-svc-vrc96 Aug 17 11:35:42.865: INFO: Got endpoints: latency-svc-vrc96 [1.136284351s] Aug 17 11:35:42.887: INFO: Created: latency-svc-q5src Aug 17 11:35:42.934: INFO: Got endpoints: latency-svc-q5src [1.051206372s] Aug 17 11:35:42.949: INFO: Created: latency-svc-s8vdh Aug 17 11:35:42.958: INFO: Got endpoints: latency-svc-s8vdh [1.027802981s] Aug 17 11:35:42.992: INFO: Created: latency-svc-t4bms Aug 17 11:35:43.018: INFO: Got endpoints: latency-svc-t4bms [963.936757ms] Aug 17 11:35:43.086: INFO: Created: latency-svc-qbj69 Aug 17 11:35:43.090: INFO: Got endpoints: latency-svc-qbj69 [1.006527287s] Aug 17 11:35:43.112: INFO: Created: latency-svc-dn6j6 Aug 17 11:35:43.136: INFO: Got endpoints: latency-svc-dn6j6 [927.187201ms] Aug 17 11:35:43.170: INFO: Created: latency-svc-vd2p5 Aug 17 11:35:43.235: INFO: Got endpoints: 
latency-svc-vd2p5 [970.151077ms] Aug 17 11:35:43.247: INFO: Created: latency-svc-jrs62 Aug 17 11:35:43.265: INFO: Got endpoints: latency-svc-jrs62 [856.015929ms] Aug 17 11:35:43.285: INFO: Created: latency-svc-9lpd7 Aug 17 11:35:43.299: INFO: Got endpoints: latency-svc-9lpd7 [859.652899ms] Aug 17 11:35:43.329: INFO: Created: latency-svc-pwwtb Aug 17 11:35:43.382: INFO: Got endpoints: latency-svc-pwwtb [810.669344ms] Aug 17 11:35:43.414: INFO: Created: latency-svc-smjth Aug 17 11:35:43.422: INFO: Got endpoints: latency-svc-smjth [797.379813ms] Aug 17 11:35:43.439: INFO: Created: latency-svc-s68gl Aug 17 11:35:43.452: INFO: Got endpoints: latency-svc-s68gl [814.488613ms] Aug 17 11:35:43.537: INFO: Created: latency-svc-bjzzw Aug 17 11:35:43.566: INFO: Got endpoints: latency-svc-bjzzw [773.926785ms] Aug 17 11:35:43.598: INFO: Created: latency-svc-rxkw6 Aug 17 11:35:43.696: INFO: Got endpoints: latency-svc-rxkw6 [891.072613ms] Aug 17 11:35:43.746: INFO: Created: latency-svc-bk7hx Aug 17 11:35:43.758: INFO: Got endpoints: latency-svc-bk7hx [921.554803ms] Aug 17 11:35:43.778: INFO: Created: latency-svc-gh8pz Aug 17 11:35:43.850: INFO: Got endpoints: latency-svc-gh8pz [984.331792ms] Aug 17 11:35:43.854: INFO: Created: latency-svc-sxhxj Aug 17 11:35:43.878: INFO: Got endpoints: latency-svc-sxhxj [944.604333ms] Aug 17 11:35:43.902: INFO: Created: latency-svc-5ktjr Aug 17 11:35:43.915: INFO: Got endpoints: latency-svc-5ktjr [957.082165ms] Aug 17 11:35:43.933: INFO: Created: latency-svc-hxbgj Aug 17 11:35:43.945: INFO: Got endpoints: latency-svc-hxbgj [926.14997ms] Aug 17 11:35:43.988: INFO: Created: latency-svc-bwv8t Aug 17 11:35:43.993: INFO: Got endpoints: latency-svc-bwv8t [902.779868ms] Aug 17 11:35:44.024: INFO: Created: latency-svc-ztzl5 Aug 17 11:35:44.036: INFO: Got endpoints: latency-svc-ztzl5 [899.31543ms] Aug 17 11:35:44.060: INFO: Created: latency-svc-pqzl6 Aug 17 11:35:44.078: INFO: Got endpoints: latency-svc-pqzl6 [842.763885ms] Aug 17 11:35:44.132: INFO: Created: latency-svc-qk8gz Aug 17 11:35:44.160: INFO: Created: latency-svc-84knm Aug 17 11:35:44.161: INFO: Got endpoints: latency-svc-qk8gz [895.318463ms] Aug 17 11:35:44.180: INFO: Got endpoints: latency-svc-84knm [880.295234ms] Aug 17 11:35:44.294: INFO: Created: latency-svc-2c49j Aug 17 11:35:44.298: INFO: Got endpoints: latency-svc-2c49j [916.067394ms] Aug 17 11:35:44.351: INFO: Created: latency-svc-4tzxs Aug 17 11:35:44.368: INFO: Got endpoints: latency-svc-4tzxs [945.696034ms] Aug 17 11:35:44.387: INFO: Created: latency-svc-qw6nt Aug 17 11:35:44.438: INFO: Got endpoints: latency-svc-qw6nt [986.343832ms] Aug 17 11:35:44.459: INFO: Created: latency-svc-2c22t Aug 17 11:35:44.495: INFO: Got endpoints: latency-svc-2c22t [928.537476ms] Aug 17 11:35:44.522: INFO: Created: latency-svc-pwhxz Aug 17 11:35:44.537: INFO: Got endpoints: latency-svc-pwhxz [840.109969ms] Aug 17 11:35:44.575: INFO: Created: latency-svc-n2sxv Aug 17 11:35:44.585: INFO: Got endpoints: latency-svc-n2sxv [826.438775ms] Aug 17 11:35:44.603: INFO: Created: latency-svc-v9pxq Aug 17 11:35:44.627: INFO: Got endpoints: latency-svc-v9pxq [776.929666ms] Aug 17 11:35:44.658: INFO: Created: latency-svc-k9xpv Aug 17 11:35:44.774: INFO: Got endpoints: latency-svc-k9xpv [895.720487ms] Aug 17 11:35:44.776: INFO: Created: latency-svc-4tq5p Aug 17 11:35:44.789: INFO: Got endpoints: latency-svc-4tq5p [873.743921ms] Aug 17 11:35:44.814: INFO: Created: latency-svc-kwf7h Aug 17 11:35:44.831: INFO: Got endpoints: latency-svc-kwf7h [886.506749ms] Aug 17 11:35:44.850: INFO: Created: 
latency-svc-48p65 Aug 17 11:35:44.924: INFO: Got endpoints: latency-svc-48p65 [930.363237ms] Aug 17 11:35:44.926: INFO: Created: latency-svc-lqqb4 Aug 17 11:35:44.933: INFO: Got endpoints: latency-svc-lqqb4 [897.384756ms] Aug 17 11:35:44.954: INFO: Created: latency-svc-lctpz Aug 17 11:35:44.966: INFO: Got endpoints: latency-svc-lctpz [887.841071ms] Aug 17 11:35:44.984: INFO: Created: latency-svc-5mwq2 Aug 17 11:35:45.001: INFO: Got endpoints: latency-svc-5mwq2 [840.676825ms] Aug 17 11:35:45.017: INFO: Created: latency-svc-t4nnz Aug 17 11:35:45.084: INFO: Got endpoints: latency-svc-t4nnz [904.366254ms] Aug 17 11:35:45.107: INFO: Created: latency-svc-nmjcm Aug 17 11:35:45.122: INFO: Got endpoints: latency-svc-nmjcm [824.083332ms] Aug 17 11:35:45.146: INFO: Created: latency-svc-8crsm Aug 17 11:35:45.185: INFO: Got endpoints: latency-svc-8crsm [817.591208ms] Aug 17 11:35:45.234: INFO: Created: latency-svc-p8snt Aug 17 11:35:45.239: INFO: Got endpoints: latency-svc-p8snt [799.7348ms] Aug 17 11:35:45.275: INFO: Created: latency-svc-8g9dc Aug 17 11:35:45.304: INFO: Got endpoints: latency-svc-8g9dc [809.233057ms] Aug 17 11:35:45.306: INFO: Latencies: [280.245042ms 436.513876ms 530.736957ms 622.418817ms 726.1967ms 772.933679ms 773.926785ms 776.929666ms 797.379813ms 799.7348ms 809.233057ms 810.669344ms 814.266722ms 814.488613ms 817.591208ms 824.083332ms 825.56829ms 826.438775ms 829.51215ms 834.267252ms 837.674102ms 840.109969ms 840.676825ms 842.763885ms 848.121812ms 849.76677ms 851.73601ms 856.015929ms 858.064082ms 859.292494ms 859.652899ms 861.436055ms 862.893894ms 863.167825ms 868.890933ms 873.511604ms 873.743921ms 873.827309ms 878.525472ms 879.42718ms 880.281233ms 880.295234ms 883.403799ms 884.067816ms 885.664689ms 886.506749ms 887.841071ms 888.724486ms 891.072613ms 892.938353ms 895.318463ms 895.720487ms 897.384756ms 899.31543ms 901.565ms 902.779868ms 902.793202ms 903.017437ms 904.061291ms 904.366254ms 911.244833ms 913.748448ms 915.772246ms 916.067394ms 921.554803ms 922.645949ms 926.14997ms 926.201339ms 927.000964ms 927.187201ms 928.1757ms 928.537476ms 929.273924ms 929.469215ms 930.363237ms 930.490895ms 936.747951ms 936.917484ms 940.808014ms 941.38026ms 941.772117ms 944.604333ms 945.37268ms 945.696034ms 946.372453ms 949.630988ms 950.060568ms 954.079712ms 956.979494ms 957.082165ms 963.936757ms 970.151077ms 970.25658ms 976.628971ms 977.440269ms 981.42829ms 984.331792ms 986.343832ms 995.97688ms 998.625529ms 998.648658ms 1.006527287s 1.017396892s 1.02453095s 1.027802981s 1.048295987s 1.05067565s 1.051206372s 1.083677051s 1.084226316s 1.097368267s 1.136284351s 1.144348696s 1.171614464s 1.2618309s 1.310513044s 1.347339039s 1.350304729s 1.382361887s 1.39274245s 1.39463017s 1.413679851s 1.434189761s 1.473250544s 1.475880531s 1.478203768s 1.493576032s 1.553534902s 1.575128845s 1.590670573s 1.591577351s 1.630265115s 1.630650837s 1.639604462s 1.642808806s 1.647736171s 1.649116601s 1.650722056s 1.65965932s 1.671464849s 1.677488075s 1.690971762s 1.700399347s 1.700724271s 1.702976592s 1.711438249s 1.718348191s 1.723058382s 1.72890251s 1.729841754s 1.731561903s 1.733321883s 1.739127856s 1.740844032s 1.7492339s 1.76080321s 1.767145282s 1.770520535s 1.777285447s 1.781221357s 1.783533671s 1.792333145s 1.793034543s 1.803706119s 1.814581644s 1.825807603s 1.827541737s 1.830457845s 1.840162958s 1.846343793s 1.869158115s 1.869596302s 1.869882798s 1.943240998s 1.961061169s 1.974926342s 1.995601544s 1.997401484s 2.005977973s 2.051927168s 2.066753357s 2.087544717s 2.100376183s 2.114023687s 2.158991788s 2.453503859s 
2.474837363s 2.579276212s 2.678203096s 2.699732116s 2.70539586s 2.743608062s 2.854856189s 2.916346472s 2.927966782s 2.929491765s 2.938854831s 2.940744463s 2.950507403s 3.057721846s] Aug 17 11:35:45.307: INFO: 50 %ile: 998.648658ms Aug 17 11:35:45.307: INFO: 90 %ile: 2.066753357s Aug 17 11:35:45.308: INFO: 99 %ile: 2.950507403s Aug 17 11:35:45.308: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 11:35:45.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-5790" for this suite. • [SLOW TEST:23.913 seconds] [sig-network] Service endpoints latency /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":303,"completed":70,"skipped":1202,"failed":0} SSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 11:35:45.432: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap configmap-9297/configmap-test-a7cc36cf-9a03-4b79-8947-fb76651d1735 STEP: Creating a pod to test consume configMaps Aug 17 11:35:45.611: INFO: Waiting up to 5m0s for pod "pod-configmaps-68404b2f-45d6-428c-bca4-7ed7fb70cb4b" in namespace "configmap-9297" to be "Succeeded or Failed" Aug 17 11:35:45.619: INFO: Pod "pod-configmaps-68404b2f-45d6-428c-bca4-7ed7fb70cb4b": Phase="Pending", Reason="", readiness=false. Elapsed: 7.853401ms Aug 17 11:35:47.626: INFO: Pod "pod-configmaps-68404b2f-45d6-428c-bca4-7ed7fb70cb4b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014618115s Aug 17 11:35:49.670: INFO: Pod "pod-configmaps-68404b2f-45d6-428c-bca4-7ed7fb70cb4b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058708179s Aug 17 11:35:51.988: INFO: Pod "pod-configmaps-68404b2f-45d6-428c-bca4-7ed7fb70cb4b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.376906886s STEP: Saw pod success Aug 17 11:35:51.989: INFO: Pod "pod-configmaps-68404b2f-45d6-428c-bca4-7ed7fb70cb4b" satisfied condition "Succeeded or Failed" Aug 17 11:35:52.068: INFO: Trying to get logs from node latest-worker pod pod-configmaps-68404b2f-45d6-428c-bca4-7ed7fb70cb4b container env-test: STEP: delete the pod Aug 17 11:35:52.859: INFO: Waiting for pod pod-configmaps-68404b2f-45d6-428c-bca4-7ed7fb70cb4b to disappear Aug 17 11:35:53.035: INFO: Pod pod-configmaps-68404b2f-45d6-428c-bca4-7ed7fb70cb4b no longer exists [AfterEach] [sig-node] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 11:35:53.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9297" for this suite. • [SLOW TEST:7.730 seconds] [sig-node] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:34 should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":303,"completed":71,"skipped":1212,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 11:35:53.163: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-96a2b1f7-c4ea-4205-bb0a-5faca84f4667 STEP: Creating a pod to test consume configMaps Aug 17 11:35:53.390: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-bbd095ca-9691-4bca-848a-73fe17b5d645" in namespace "projected-4779" to be "Succeeded or Failed" Aug 17 11:35:53.522: INFO: Pod "pod-projected-configmaps-bbd095ca-9691-4bca-848a-73fe17b5d645": Phase="Pending", Reason="", readiness=false. Elapsed: 131.638573ms Aug 17 11:35:55.847: INFO: Pod "pod-projected-configmaps-bbd095ca-9691-4bca-848a-73fe17b5d645": Phase="Pending", Reason="", readiness=false. Elapsed: 2.456431088s Aug 17 11:35:57.910: INFO: Pod "pod-projected-configmaps-bbd095ca-9691-4bca-848a-73fe17b5d645": Phase="Pending", Reason="", readiness=false. Elapsed: 4.519689564s Aug 17 11:36:00.059: INFO: Pod "pod-projected-configmaps-bbd095ca-9691-4bca-848a-73fe17b5d645": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.6680005s STEP: Saw pod success Aug 17 11:36:00.059: INFO: Pod "pod-projected-configmaps-bbd095ca-9691-4bca-848a-73fe17b5d645" satisfied condition "Succeeded or Failed" Aug 17 11:36:00.147: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-bbd095ca-9691-4bca-848a-73fe17b5d645 container projected-configmap-volume-test: STEP: delete the pod Aug 17 11:36:00.897: INFO: Waiting for pod pod-projected-configmaps-bbd095ca-9691-4bca-848a-73fe17b5d645 to disappear Aug 17 11:36:00.966: INFO: Pod pod-projected-configmaps-bbd095ca-9691-4bca-848a-73fe17b5d645 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 11:36:00.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4779" for this suite. • [SLOW TEST:7.829 seconds] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":72,"skipped":1221,"failed":0} SSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 11:36:00.994: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-2130 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a new StatefulSet Aug 17 11:36:01.986: INFO: Found 0 stateful pods, waiting for 3 Aug 17 11:36:12.224: INFO: Found 2 stateful pods, waiting for 3 Aug 17 11:36:22.046: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Aug 17 11:36:22.046: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - 
Ready=true Aug 17 11:36:22.046: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Aug 17 11:36:22.217: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Aug 17 11:36:32.377: INFO: Updating stateful set ss2 Aug 17 11:36:32.457: INFO: Waiting for Pod statefulset-2130/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted Aug 17 11:36:43.197: INFO: Found 2 stateful pods, waiting for 3 Aug 17 11:36:53.208: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Aug 17 11:36:53.208: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Aug 17 11:36:53.208: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Aug 17 11:36:53.244: INFO: Updating stateful set ss2 Aug 17 11:36:53.315: INFO: Waiting for Pod statefulset-2130/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Aug 17 11:37:03.415: INFO: Updating stateful set ss2 Aug 17 11:37:03.564: INFO: Waiting for StatefulSet statefulset-2130/ss2 to complete update Aug 17 11:37:03.565: INFO: Waiting for Pod statefulset-2130/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Aug 17 11:37:13.580: INFO: Waiting for StatefulSet statefulset-2130/ss2 to complete update Aug 17 11:37:13.580: INFO: Waiting for Pod statefulset-2130/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Aug 17 11:37:23.604: INFO: Waiting for StatefulSet statefulset-2130/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Aug 17 11:37:33.578: INFO: Deleting all statefulset in ns statefulset-2130 Aug 17 11:37:33.582: INFO: Scaling statefulset ss2 to 0 Aug 17 11:38:03.685: INFO: Waiting for statefulset status.replicas updated to 0 Aug 17 11:38:03.692: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 11:38:04.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2130" for this suite. 
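The canary and phased roll-out exercised above are driven by the StatefulSet partition field: during a RollingUpdate, only pods whose ordinal is greater than or equal to spec.updateStrategy.rollingUpdate.partition are moved to the new revision, so lowering the partition step by step phases the update in. A minimal kubectl sketch of the same mechanics (the suite itself drives this through the Go e2e framework; the image value is the one logged above, the rest is illustrative):

  # canary: only ss2-2 (ordinal >= 2) picks up the new template
  kubectl -n statefulset-2130 patch statefulset ss2 \
    -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":2}}}}'
  kubectl -n statefulset-2130 patch statefulset ss2 --type=json \
    -p '[{"op":"replace","path":"/spec/template/spec/containers/0/image","value":"docker.io/library/httpd:2.4.39-alpine"}]'
  # phased roll-out: lower the partition to update the remaining ordinals
  kubectl -n statefulset-2130 patch statefulset ss2 \
    -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'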
• [SLOW TEST:123.441 seconds] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":303,"completed":73,"skipped":1226,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 11:38:04.440: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:89 Aug 17 11:38:07.165: INFO: Waiting up to 1m0s for all nodes to be ready Aug 17 11:39:07.236: INFO: Waiting for terminating namespaces to be deleted... [BeforeEach] PreemptionExecutionPath /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 11:39:07.297: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption-path STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] PreemptionExecutionPath /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:487 STEP: Finding an available node STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. 
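The preemption path being set up here depends on PriorityClass objects: the scheduler may evict lower-priority pods to make room for a pending higher-priority pod. A minimal sketch of the objects involved (name and value are illustrative, not the ones the suite creates):

  apiVersion: scheduling.k8s.io/v1
  kind: PriorityClass
  metadata:
    name: high-priority        # illustrative
  value: 1000000               # higher value wins during preemption
  globalDefault: false
  description: Pods at this priority may preempt lower-priority pods.

A pod (or a ReplicaSet's pod template) opts in with:

  spec:
    priorityClassName: high-priority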
Aug 17 11:39:11.491: INFO: found a healthy node: latest-worker2 [It] runs ReplicaSets to verify preemption running path [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 17 11:39:23.710: INFO: pods created so far: [1 1 1] Aug 17 11:39:23.710: INFO: length of pods created so far: 3 Aug 17 11:39:35.727: INFO: pods created so far: [2 2 1] [AfterEach] PreemptionExecutionPath /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 11:39:42.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-path-1734" for this suite. [AfterEach] PreemptionExecutionPath /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:461 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 11:39:42.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-6892" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77 • [SLOW TEST:98.556 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PreemptionExecutionPath /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:450 runs ReplicaSets to verify preemption running path [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":303,"completed":74,"skipped":1288,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 11:39:42.998: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide podname only [NodeConformance] [Conformance] 
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 17 11:39:43.109: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3b5bc1bd-c6af-4ecc-941d-cd97166356d5" in namespace "projected-1385" to be "Succeeded or Failed" Aug 17 11:39:43.145: INFO: Pod "downwardapi-volume-3b5bc1bd-c6af-4ecc-941d-cd97166356d5": Phase="Pending", Reason="", readiness=false. Elapsed: 35.574965ms Aug 17 11:39:45.153: INFO: Pod "downwardapi-volume-3b5bc1bd-c6af-4ecc-941d-cd97166356d5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04337035s Aug 17 11:39:47.159: INFO: Pod "downwardapi-volume-3b5bc1bd-c6af-4ecc-941d-cd97166356d5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049713142s Aug 17 11:39:49.198: INFO: Pod "downwardapi-volume-3b5bc1bd-c6af-4ecc-941d-cd97166356d5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.088122052s STEP: Saw pod success Aug 17 11:39:49.198: INFO: Pod "downwardapi-volume-3b5bc1bd-c6af-4ecc-941d-cd97166356d5" satisfied condition "Succeeded or Failed" Aug 17 11:39:49.329: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-3b5bc1bd-c6af-4ecc-941d-cd97166356d5 container client-container: STEP: delete the pod Aug 17 11:39:49.751: INFO: Waiting for pod downwardapi-volume-3b5bc1bd-c6af-4ecc-941d-cd97166356d5 to disappear Aug 17 11:39:49.764: INFO: Pod downwardapi-volume-3b5bc1bd-c6af-4ecc-941d-cd97166356d5 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 11:39:49.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1385" for this suite. 
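The "podname only" check works by projecting the pod's own name into a file through the downward API and reading it back from inside the container. A minimal sketch of such a pod (the image, pod name, and mount path are illustrative; the suite uses its own test image):

  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-demo     # illustrative
  spec:
    containers:
    - name: client-container
      image: busybox           # illustrative
      command: ["sh", "-c", "cat /etc/podinfo/podname"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      projected:
        sources:
        - downwardAPI:
            items:
            - path: podname
              fieldRef:
                fieldPath: metadata.name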
• [SLOW TEST:6.876 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide podname only [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":303,"completed":75,"skipped":1335,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Aggregator /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 11:39:49.878: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Aug 17 11:39:49.969: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the sample API server. 
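Registering an aggregated API server boils down to creating an APIService object that tells the kube-apiserver which group/version to proxy to which in-cluster Service. A minimal sketch (the group shown is the one the upstream sample-apiserver serves; the service name, port, and priorities are illustrative):

  apiVersion: apiregistration.k8s.io/v1
  kind: APIService
  metadata:
    name: v1alpha1.wardle.example.com
  spec:
    group: wardle.example.com
    version: v1alpha1
    service:
      name: sample-api           # illustrative
      namespace: aggregator-489
      port: 443
    caBundle: <base64-encoded CA>   # placeholder; or insecureSkipTLSVerify: true in test-only setups
    groupPriorityMinimum: 2000
    versionPriority: 200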
Aug 17 11:39:53.882: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Aug 17 11:39:57.669: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733261193, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733261193, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733261193, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733261193, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-67c46cd746\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 17 11:39:59.765: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733261193, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733261193, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733261193, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733261193, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-67c46cd746\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 17 11:40:03.471: INFO: Waited 1.472352367s for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 11:40:05.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-489" for this suite. 
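Once the APIService reports Available, the aggregated group is served through the core apiserver; a quick way to verify this by hand (group name as above, shown for illustration):

  kubectl get apiservice v1alpha1.wardle.example.com
  kubectl get --raw /apis/wardle.example.com/v1alpha1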
• [SLOW TEST:16.041 seconds] [sig-api-machinery] Aggregator /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":303,"completed":76,"skipped":1376,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 11:40:05.922: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir volume type on node default medium Aug 17 11:40:06.562: INFO: Waiting up to 5m0s for pod "pod-66265dbb-da37-40b3-979d-50eb83c0ec9a" in namespace "emptydir-680" to be "Succeeded or Failed" Aug 17 11:40:06.728: INFO: Pod "pod-66265dbb-da37-40b3-979d-50eb83c0ec9a": Phase="Pending", Reason="", readiness=false. Elapsed: 165.324627ms Aug 17 11:40:08.735: INFO: Pod "pod-66265dbb-da37-40b3-979d-50eb83c0ec9a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.172100607s Aug 17 11:40:10.741: INFO: Pod "pod-66265dbb-da37-40b3-979d-50eb83c0ec9a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.178732821s Aug 17 11:40:12.860: INFO: Pod "pod-66265dbb-da37-40b3-979d-50eb83c0ec9a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.297941646s STEP: Saw pod success Aug 17 11:40:12.861: INFO: Pod "pod-66265dbb-da37-40b3-979d-50eb83c0ec9a" satisfied condition "Succeeded or Failed" Aug 17 11:40:12.866: INFO: Trying to get logs from node latest-worker2 pod pod-66265dbb-da37-40b3-979d-50eb83c0ec9a container test-container: STEP: delete the pod Aug 17 11:40:12.907: INFO: Waiting for pod pod-66265dbb-da37-40b3-979d-50eb83c0ec9a to disappear Aug 17 11:40:12.925: INFO: Pod pod-66265dbb-da37-40b3-979d-50eb83c0ec9a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 11:40:12.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-680" for this suite. 
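"Default medium" here means an emptyDir backed by the node's ordinary storage; the kubelet creates the directory world-writable, which is the mode this spec asserts. Setting medium: Memory switches the volume to tmpfs instead. A minimal sketch of the two variants:

  volumes:
  - name: scratch
    emptyDir: {}               # default medium: node disk
  - name: scratch-in-memory
    emptyDir:
      medium: Memory           # tmpfs; usage counts against the container's memory limit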
• [SLOW TEST:7.019 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":77,"skipped":1401,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 11:40:12.946: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name cm-test-opt-del-1ff41767-9410-406f-8e74-b3c0c07abbd1 STEP: Creating configMap with name cm-test-opt-upd-a8605930-637d-4b71-976f-39a656d976f8 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-1ff41767-9410-406f-8e74-b3c0c07abbd1 STEP: Updating configmap cm-test-opt-upd-a8605930-637d-4b71-976f-39a656d976f8 STEP: Creating configMap with name cm-test-opt-create-c27defe4-f8ae-4668-9f78-75a8b916020a STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 11:41:26.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5055" for this suite. 
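The optional flag on a projected ConfigMap source is what lets this spec delete one ConfigMap and create another while the pod keeps running: a missing optional source is simply omitted from the volume rather than failing the mount, and the kubelet refreshes the projected content as the sources change. A minimal sketch (ConfigMap names illustrative):

  volumes:
  - name: config
    projected:
      sources:
      - configMap:
          name: cm-opt-del       # illustrative; deleted mid-test
          optional: true
      - configMap:
          name: cm-opt-create    # illustrative; created mid-test
          optional: true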
• [SLOW TEST:73.217 seconds] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":78,"skipped":1451,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 11:41:26.164: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should add annotations for pods in rc [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating Agnhost RC Aug 17 11:41:26.281: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7347' Aug 17 11:41:38.978: INFO: stderr: "" Aug 17 11:41:38.978: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Aug 17 11:41:39.986: INFO: Selector matched 1 pods for map[app:agnhost] Aug 17 11:41:39.987: INFO: Found 0 / 1 Aug 17 11:41:41.228: INFO: Selector matched 1 pods for map[app:agnhost] Aug 17 11:41:41.228: INFO: Found 0 / 1 Aug 17 11:41:42.011: INFO: Selector matched 1 pods for map[app:agnhost] Aug 17 11:41:42.011: INFO: Found 0 / 1 Aug 17 11:41:42.986: INFO: Selector matched 1 pods for map[app:agnhost] Aug 17 11:41:42.986: INFO: Found 1 / 1 Aug 17 11:41:42.987: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Aug 17 11:41:42.993: INFO: Selector matched 1 pods for map[app:agnhost] Aug 17 11:41:42.994: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Aug 17 11:41:42.994: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config patch pod agnhost-primary-tjs9h --namespace=kubectl-7347 -p {"metadata":{"annotations":{"x":"y"}}}' Aug 17 11:41:44.342: INFO: stderr: "" Aug 17 11:41:44.342: INFO: stdout: "pod/agnhost-primary-tjs9h patched\n" STEP: checking annotations Aug 17 11:41:44.351: INFO: Selector matched 1 pods for map[app:agnhost] Aug 17 11:41:44.351: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
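The annotation patch logged above is a strategic-merge patch; the same operation in generic form, plus a jsonpath read-back (pod name and namespace are placeholders):

  kubectl patch pod <pod-name> -n <namespace> -p '{"metadata":{"annotations":{"x":"y"}}}'
  kubectl get pod <pod-name> -n <namespace> -o jsonpath='{.metadata.annotations.x}'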
[AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 11:41:44.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7347" for this suite. • [SLOW TEST:18.200 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl patch /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1490 should add annotations for pods in rc [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":303,"completed":79,"skipped":1454,"failed":0} [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 11:41:44.364: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 17 11:41:44.453: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4b8a7f79-3fb4-4b8c-9c87-eb498bb55a61" in namespace "downward-api-2464" to be "Succeeded or Failed" Aug 17 11:41:44.465: INFO: Pod "downwardapi-volume-4b8a7f79-3fb4-4b8c-9c87-eb498bb55a61": Phase="Pending", Reason="", readiness=false. Elapsed: 11.669763ms Aug 17 11:41:46.473: INFO: Pod "downwardapi-volume-4b8a7f79-3fb4-4b8c-9c87-eb498bb55a61": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01999752s Aug 17 11:41:48.481: INFO: Pod "downwardapi-volume-4b8a7f79-3fb4-4b8c-9c87-eb498bb55a61": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.027839658s STEP: Saw pod success Aug 17 11:41:48.481: INFO: Pod "downwardapi-volume-4b8a7f79-3fb4-4b8c-9c87-eb498bb55a61" satisfied condition "Succeeded or Failed" Aug 17 11:41:48.487: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-4b8a7f79-3fb4-4b8c-9c87-eb498bb55a61 container client-container: STEP: delete the pod Aug 17 11:41:48.641: INFO: Waiting for pod downwardapi-volume-4b8a7f79-3fb4-4b8c-9c87-eb498bb55a61 to disappear Aug 17 11:41:48.654: INFO: Pod downwardapi-volume-4b8a7f79-3fb4-4b8c-9c87-eb498bb55a61 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 11:41:48.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2464" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":80,"skipped":1454,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 11:41:48.671: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 17 11:41:48.802: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Aug 17 11:41:51.186: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 11:41:51.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9334" for this suite. 
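The failure condition in this spec comes from a ResourceQuota capping the namespace at two pods while the ReplicationController asks for three; the controller then sets a ReplicaFailure condition until it is scaled back within quota. A minimal sketch of the conflicting pair and a way to read the condition (image and pod template are illustrative):

  apiVersion: v1
  kind: ResourceQuota
  metadata:
    name: condition-test
  spec:
    hard:
      pods: "2"
  ---
  apiVersion: v1
  kind: ReplicationController
  metadata:
    name: condition-test
  spec:
    replicas: 3                # one more than the quota allows
    selector:
      name: condition-test
    template:
      metadata:
        labels:
          name: condition-test
      spec:
        containers:
        - name: pause
          image: k8s.gcr.io/pause:3.2   # illustrative

  kubectl get rc condition-test -o jsonpath='{.status.conditions[?(@.type=="ReplicaFailure")].message}'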
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":303,"completed":81,"skipped":1489,"failed":0} S ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 11:41:51.313: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Kubectl label /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1333 STEP: creating the pod Aug 17 11:41:51.376: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2783' Aug 17 11:41:54.795: INFO: stderr: "" Aug 17 11:41:54.795: INFO: stdout: "pod/pause created\n" Aug 17 11:41:54.796: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Aug 17 11:41:54.797: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-2783" to be "running and ready" Aug 17 11:41:54.849: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 52.208791ms Aug 17 11:41:56.875: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077889197s Aug 17 11:41:58.881: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.084598548s Aug 17 11:42:00.887: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 6.090591459s Aug 17 11:42:00.888: INFO: Pod "pause" satisfied condition "running and ready" Aug 17 11:42:00.888: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: adding the label testing-label with value testing-label-value to a pod Aug 17 11:42:00.889: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-2783' Aug 17 11:42:02.533: INFO: stderr: "" Aug 17 11:42:02.533: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Aug 17 11:42:02.534: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-2783' Aug 17 11:42:05.797: INFO: stderr: "" Aug 17 11:42:05.797: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 11s testing-label-value\n" STEP: removing the label testing-label of a pod Aug 17 11:42:05.798: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-2783' Aug 17 11:42:07.155: INFO: stderr: "" Aug 17 11:42:07.155: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Aug 17 11:42:07.156: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-2783' Aug 17 11:42:08.712: INFO: stderr: "" Aug 17 11:42:08.712: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 14s \n" [AfterEach] Kubectl label /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1340 STEP: using delete to clean up resources Aug 17 11:42:08.713: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2783' Aug 17 11:42:10.705: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 17 11:42:10.706: INFO: stdout: "pod \"pause\" force deleted\n" Aug 17 11:42:10.706: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-2783' Aug 17 11:42:12.191: INFO: stderr: "No resources found in kubectl-2783 namespace.\n" Aug 17 11:42:12.191: INFO: stdout: "" Aug 17 11:42:12.192: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-2783 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Aug 17 11:42:13.663: INFO: stderr: "" Aug 17 11:42:13.664: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 11:42:13.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2783" for this suite. 
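The label round-trip just logged reduces to four plain kubectl calls; the server address and namespace flags from this run are dropped, and kubectl run stands in for the suite's create-from-manifest step:

kubectl run pause --image=k8s.gcr.io/pause:3.2
kubectl label pods pause testing-label=testing-label-value
kubectl get pod pause -L testing-label    # TESTING-LABEL column shows the value
kubectl label pods pause testing-label-   # trailing dash deletes the label key
kubectl get pod pause -L testing-label    # column is now empty

The trailing-dash form is standard kubectl syntax for removing a label, which is why the second "pod/pause labeled" line above actually corresponds to a deletion.
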
• [SLOW TEST:22.843 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1330 should update the label on a resource [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":303,"completed":82,"skipped":1490,"failed":0} SSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 11:42:14.158: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop complex daemon [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 17 11:42:14.606: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Aug 17 11:42:14.796: INFO: Number of nodes with available pods: 0 Aug 17 11:42:14.796: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
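The steps that follow drive DaemonSet scheduling purely through node labels. A sketch of the kind of manifest involved, under an assumed label key and image (the suite's actual key is internal to the test):

kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      nodeSelector:
        color: blue              # assumed key/value for illustration
      containers:
      - name: app
        image: k8s.gcr.io/pause:3.2
EOF
kubectl label node latest-worker color=blue               # daemon pod launches here
kubectl label node latest-worker color=green --overwrite  # selector stops matching; pod is removed

The polling below is the suite waiting for exactly that launch-then-unschedule cycle, followed by a selector update and a switch to the RollingUpdate strategy.
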
Aug 17 11:42:14.916: INFO: Number of nodes with available pods: 0 Aug 17 11:42:14.916: INFO: Node latest-worker is running more than one daemon pod Aug 17 11:42:15.925: INFO: Number of nodes with available pods: 0 Aug 17 11:42:15.925: INFO: Node latest-worker is running more than one daemon pod Aug 17 11:42:16.994: INFO: Number of nodes with available pods: 0 Aug 17 11:42:16.994: INFO: Node latest-worker is running more than one daemon pod Aug 17 11:42:17.923: INFO: Number of nodes with available pods: 0 Aug 17 11:42:17.923: INFO: Node latest-worker is running more than one daemon pod Aug 17 11:42:18.926: INFO: Number of nodes with available pods: 0 Aug 17 11:42:18.926: INFO: Node latest-worker is running more than one daemon pod Aug 17 11:42:19.924: INFO: Number of nodes with available pods: 1 Aug 17 11:42:19.924: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Aug 17 11:42:20.005: INFO: Number of nodes with available pods: 1 Aug 17 11:42:20.005: INFO: Number of running nodes: 0, number of available pods: 1 Aug 17 11:42:21.014: INFO: Number of nodes with available pods: 0 Aug 17 11:42:21.014: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Aug 17 11:42:22.400: INFO: Number of nodes with available pods: 0 Aug 17 11:42:22.400: INFO: Node latest-worker is running more than one daemon pod Aug 17 11:42:23.410: INFO: Number of nodes with available pods: 0 Aug 17 11:42:23.410: INFO: Node latest-worker is running more than one daemon pod Aug 17 11:42:24.530: INFO: Number of nodes with available pods: 0 Aug 17 11:42:24.530: INFO: Node latest-worker is running more than one daemon pod Aug 17 11:42:25.407: INFO: Number of nodes with available pods: 0 Aug 17 11:42:25.407: INFO: Node latest-worker is running more than one daemon pod Aug 17 11:42:26.407: INFO: Number of nodes with available pods: 0 Aug 17 11:42:26.407: INFO: Node latest-worker is running more than one daemon pod Aug 17 11:42:27.408: INFO: Number of nodes with available pods: 0 Aug 17 11:42:27.408: INFO: Node latest-worker is running more than one daemon pod Aug 17 11:42:28.407: INFO: Number of nodes with available pods: 0 Aug 17 11:42:28.407: INFO: Node latest-worker is running more than one daemon pod Aug 17 11:42:29.485: INFO: Number of nodes with available pods: 0 Aug 17 11:42:29.485: INFO: Node latest-worker is running more than one daemon pod Aug 17 11:42:30.856: INFO: Number of nodes with available pods: 0 Aug 17 11:42:30.857: INFO: Node latest-worker is running more than one daemon pod Aug 17 11:42:31.588: INFO: Number of nodes with available pods: 0 Aug 17 11:42:31.588: INFO: Node latest-worker is running more than one daemon pod Aug 17 11:42:32.408: INFO: Number of nodes with available pods: 0 Aug 17 11:42:32.409: INFO: Node latest-worker is running more than one daemon pod Aug 17 11:42:33.419: INFO: Number of nodes with available pods: 0 Aug 17 11:42:33.419: INFO: Node latest-worker is running more than one daemon pod Aug 17 11:42:34.408: INFO: Number of nodes with available pods: 0 Aug 17 11:42:34.408: INFO: Node latest-worker is running more than one daemon pod Aug 17 11:42:35.581: INFO: Number of nodes with available pods: 0 Aug 17 11:42:35.581: INFO: Node latest-worker is running more than one daemon pod Aug 17 11:42:36.408: INFO: Number of nodes with available pods: 0 Aug 17 11:42:36.408: INFO: Node latest-worker is running 
more than one daemon pod Aug 17 11:42:37.409: INFO: Number of nodes with available pods: 1 Aug 17 11:42:37.409: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8360, will wait for the garbage collector to delete the pods Aug 17 11:42:37.488: INFO: Deleting DaemonSet.extensions daemon-set took: 11.127256ms Aug 17 11:42:37.889: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.782173ms Aug 17 11:42:50.100: INFO: Number of nodes with available pods: 0 Aug 17 11:42:50.100: INFO: Number of running nodes: 0, number of available pods: 0 Aug 17 11:42:50.109: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8360/daemonsets","resourceVersion":"712780"},"items":null} Aug 17 11:42:50.117: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8360/pods","resourceVersion":"712780"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 11:42:50.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8360" for this suite. • [SLOW TEST:36.049 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":303,"completed":83,"skipped":1498,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 11:42:50.209: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Aug 17 11:43:04.401: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1797 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true 
CaptureStderr:true PreserveWhitespace:false} Aug 17 11:43:04.401: INFO: >>> kubeConfig: /root/.kube/config I0817 11:43:04.464041 10 log.go:181] (0x40033cc160) (0x40024bec80) Create stream I0817 11:43:04.464236 10 log.go:181] (0x40033cc160) (0x40024bec80) Stream added, broadcasting: 1 I0817 11:43:04.469788 10 log.go:181] (0x40033cc160) Reply frame received for 1 I0817 11:43:04.469942 10 log.go:181] (0x40033cc160) (0x4005dc4500) Create stream I0817 11:43:04.470005 10 log.go:181] (0x40033cc160) (0x4005dc4500) Stream added, broadcasting: 3 I0817 11:43:04.471189 10 log.go:181] (0x40033cc160) Reply frame received for 3 I0817 11:43:04.471327 10 log.go:181] (0x40033cc160) (0x4005dc45a0) Create stream I0817 11:43:04.471428 10 log.go:181] (0x40033cc160) (0x4005dc45a0) Stream added, broadcasting: 5 I0817 11:43:04.472947 10 log.go:181] (0x40033cc160) Reply frame received for 5 I0817 11:43:04.520556 10 log.go:181] (0x40033cc160) Data frame received for 5 I0817 11:43:04.520678 10 log.go:181] (0x4005dc45a0) (5) Data frame handling I0817 11:43:04.520917 10 log.go:181] (0x40033cc160) Data frame received for 3 I0817 11:43:04.521015 10 log.go:181] (0x4005dc4500) (3) Data frame handling I0817 11:43:04.521108 10 log.go:181] (0x4005dc4500) (3) Data frame sent I0817 11:43:04.521184 10 log.go:181] (0x40033cc160) Data frame received for 3 I0817 11:43:04.521254 10 log.go:181] (0x4005dc4500) (3) Data frame handling I0817 11:43:04.521828 10 log.go:181] (0x40033cc160) Data frame received for 1 I0817 11:43:04.521905 10 log.go:181] (0x40024bec80) (1) Data frame handling I0817 11:43:04.521981 10 log.go:181] (0x40024bec80) (1) Data frame sent I0817 11:43:04.522064 10 log.go:181] (0x40033cc160) (0x40024bec80) Stream removed, broadcasting: 1 I0817 11:43:04.522379 10 log.go:181] (0x40033cc160) (0x40024bec80) Stream removed, broadcasting: 1 I0817 11:43:04.522455 10 log.go:181] (0x40033cc160) (0x4005dc4500) Stream removed, broadcasting: 3 I0817 11:43:04.522526 10 log.go:181] (0x40033cc160) (0x4005dc45a0) Stream removed, broadcasting: 5 Aug 17 11:43:04.522: INFO: Exec stderr: "" Aug 17 11:43:04.523: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1797 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 17 11:43:04.523: INFO: >>> kubeConfig: /root/.kube/config I0817 11:43:04.531819 10 log.go:181] (0x40033cc160) Go away received I0817 11:43:04.582194 10 log.go:181] (0x4001e6c420) (0x4002b96aa0) Create stream I0817 11:43:04.582378 10 log.go:181] (0x4001e6c420) (0x4002b96aa0) Stream added, broadcasting: 1 I0817 11:43:04.586560 10 log.go:181] (0x4001e6c420) Reply frame received for 1 I0817 11:43:04.586747 10 log.go:181] (0x4001e6c420) (0x4002b96b40) Create stream I0817 11:43:04.586844 10 log.go:181] (0x4001e6c420) (0x4002b96b40) Stream added, broadcasting: 3 I0817 11:43:04.588304 10 log.go:181] (0x4001e6c420) Reply frame received for 3 I0817 11:43:04.588420 10 log.go:181] (0x4001e6c420) (0x40024bed20) Create stream I0817 11:43:04.588496 10 log.go:181] (0x4001e6c420) (0x40024bed20) Stream added, broadcasting: 5 I0817 11:43:04.590008 10 log.go:181] (0x4001e6c420) Reply frame received for 5 I0817 11:43:04.650310 10 log.go:181] (0x4001e6c420) Data frame received for 5 I0817 11:43:04.650506 10 log.go:181] (0x40024bed20) (5) Data frame handling I0817 11:43:04.650689 10 log.go:181] (0x4001e6c420) Data frame received for 3 I0817 11:43:04.650911 10 log.go:181] (0x4002b96b40) (3) Data frame handling I0817 11:43:04.651115 10 log.go:181] 
(0x4002b96b40) (3) Data frame sent I0817 11:43:04.651270 10 log.go:181] (0x4001e6c420) Data frame received for 3 I0817 11:43:04.651423 10 log.go:181] (0x4002b96b40) (3) Data frame handling I0817 11:43:04.651606 10 log.go:181] (0x4001e6c420) Data frame received for 1 I0817 11:43:04.651731 10 log.go:181] (0x4002b96aa0) (1) Data frame handling I0817 11:43:04.651868 10 log.go:181] (0x4002b96aa0) (1) Data frame sent I0817 11:43:04.651982 10 log.go:181] (0x4001e6c420) (0x4002b96aa0) Stream removed, broadcasting: 1 I0817 11:43:04.652119 10 log.go:181] (0x4001e6c420) Go away received I0817 11:43:04.652557 10 log.go:181] (0x4001e6c420) (0x4002b96aa0) Stream removed, broadcasting: 1 I0817 11:43:04.652708 10 log.go:181] (0x4001e6c420) (0x4002b96b40) Stream removed, broadcasting: 3 I0817 11:43:04.652935 10 log.go:181] (0x4001e6c420) (0x40024bed20) Stream removed, broadcasting: 5 Aug 17 11:43:04.653: INFO: Exec stderr: "" Aug 17 11:43:04.653: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1797 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 17 11:43:04.653: INFO: >>> kubeConfig: /root/.kube/config I0817 11:43:04.750107 10 log.go:181] (0x400015fc30) (0x4004234960) Create stream I0817 11:43:04.750311 10 log.go:181] (0x400015fc30) (0x4004234960) Stream added, broadcasting: 1 I0817 11:43:04.755782 10 log.go:181] (0x400015fc30) Reply frame received for 1 I0817 11:43:04.755962 10 log.go:181] (0x400015fc30) (0x4005dc4640) Create stream I0817 11:43:04.756034 10 log.go:181] (0x400015fc30) (0x4005dc4640) Stream added, broadcasting: 3 I0817 11:43:04.757896 10 log.go:181] (0x400015fc30) Reply frame received for 3 I0817 11:43:04.758107 10 log.go:181] (0x400015fc30) (0x4004234a00) Create stream I0817 11:43:04.758222 10 log.go:181] (0x400015fc30) (0x4004234a00) Stream added, broadcasting: 5 I0817 11:43:04.760022 10 log.go:181] (0x400015fc30) Reply frame received for 5 I0817 11:43:04.849366 10 log.go:181] (0x400015fc30) Data frame received for 5 I0817 11:43:04.849541 10 log.go:181] (0x4004234a00) (5) Data frame handling I0817 11:43:04.849763 10 log.go:181] (0x400015fc30) Data frame received for 3 I0817 11:43:04.849966 10 log.go:181] (0x4005dc4640) (3) Data frame handling I0817 11:43:04.850112 10 log.go:181] (0x4005dc4640) (3) Data frame sent I0817 11:43:04.850220 10 log.go:181] (0x400015fc30) Data frame received for 3 I0817 11:43:04.850334 10 log.go:181] (0x4005dc4640) (3) Data frame handling I0817 11:43:04.850513 10 log.go:181] (0x400015fc30) Data frame received for 1 I0817 11:43:04.850620 10 log.go:181] (0x4004234960) (1) Data frame handling I0817 11:43:04.850732 10 log.go:181] (0x4004234960) (1) Data frame sent I0817 11:43:04.850861 10 log.go:181] (0x400015fc30) (0x4004234960) Stream removed, broadcasting: 1 I0817 11:43:04.851021 10 log.go:181] (0x400015fc30) Go away received I0817 11:43:04.851379 10 log.go:181] (0x400015fc30) (0x4004234960) Stream removed, broadcasting: 1 I0817 11:43:04.851503 10 log.go:181] (0x400015fc30) (0x4005dc4640) Stream removed, broadcasting: 3 I0817 11:43:04.851601 10 log.go:181] (0x400015fc30) (0x4004234a00) Stream removed, broadcasting: 5 Aug 17 11:43:04.851: INFO: Exec stderr: "" Aug 17 11:43:04.851: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1797 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 17 11:43:04.851: INFO: >>> kubeConfig: /root/.kube/config I0817 11:43:04.907469 
10 log.go:181] (0x4001e1e0b0) (0x400747e460) Create stream I0817 11:43:04.907683 10 log.go:181] (0x4001e1e0b0) (0x400747e460) Stream added, broadcasting: 1 I0817 11:43:04.911681 10 log.go:181] (0x4001e1e0b0) Reply frame received for 1 I0817 11:43:04.911820 10 log.go:181] (0x4001e1e0b0) (0x400747e500) Create stream I0817 11:43:04.911890 10 log.go:181] (0x4001e1e0b0) (0x400747e500) Stream added, broadcasting: 3 I0817 11:43:04.913275 10 log.go:181] (0x4001e1e0b0) Reply frame received for 3 I0817 11:43:04.913444 10 log.go:181] (0x4001e1e0b0) (0x400747e5a0) Create stream I0817 11:43:04.913543 10 log.go:181] (0x4001e1e0b0) (0x400747e5a0) Stream added, broadcasting: 5 I0817 11:43:04.914862 10 log.go:181] (0x4001e1e0b0) Reply frame received for 5 I0817 11:43:04.975849 10 log.go:181] (0x4001e1e0b0) Data frame received for 5 I0817 11:43:04.976012 10 log.go:181] (0x400747e5a0) (5) Data frame handling I0817 11:43:04.976148 10 log.go:181] (0x4001e1e0b0) Data frame received for 3 I0817 11:43:04.976273 10 log.go:181] (0x400747e500) (3) Data frame handling I0817 11:43:04.976437 10 log.go:181] (0x400747e500) (3) Data frame sent I0817 11:43:04.976553 10 log.go:181] (0x4001e1e0b0) Data frame received for 3 I0817 11:43:04.976646 10 log.go:181] (0x400747e500) (3) Data frame handling I0817 11:43:04.977630 10 log.go:181] (0x4001e1e0b0) Data frame received for 1 I0817 11:43:04.977727 10 log.go:181] (0x400747e460) (1) Data frame handling I0817 11:43:04.977812 10 log.go:181] (0x400747e460) (1) Data frame sent I0817 11:43:04.977890 10 log.go:181] (0x4001e1e0b0) (0x400747e460) Stream removed, broadcasting: 1 I0817 11:43:04.977986 10 log.go:181] (0x4001e1e0b0) Go away received I0817 11:43:04.978556 10 log.go:181] (0x4001e1e0b0) (0x400747e460) Stream removed, broadcasting: 1 I0817 11:43:04.978738 10 log.go:181] (0x4001e1e0b0) (0x400747e500) Stream removed, broadcasting: 3 I0817 11:43:04.978861 10 log.go:181] (0x4001e1e0b0) (0x400747e5a0) Stream removed, broadcasting: 5 Aug 17 11:43:04.978: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Aug 17 11:43:04.979: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1797 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 17 11:43:04.979: INFO: >>> kubeConfig: /root/.kube/config I0817 11:43:05.255139 10 log.go:181] (0x4003974000) (0x4004234be0) Create stream I0817 11:43:05.255499 10 log.go:181] (0x4003974000) (0x4004234be0) Stream added, broadcasting: 1 I0817 11:43:05.262393 10 log.go:181] (0x4003974000) Reply frame received for 1 I0817 11:43:05.262616 10 log.go:181] (0x4003974000) (0x4002b96d20) Create stream I0817 11:43:05.262702 10 log.go:181] (0x4003974000) (0x4002b96d20) Stream added, broadcasting: 3 I0817 11:43:05.264053 10 log.go:181] (0x4003974000) Reply frame received for 3 I0817 11:43:05.264184 10 log.go:181] (0x4003974000) (0x4004234c80) Create stream I0817 11:43:05.264250 10 log.go:181] (0x4003974000) (0x4004234c80) Stream added, broadcasting: 5 I0817 11:43:05.265406 10 log.go:181] (0x4003974000) Reply frame received for 5 I0817 11:43:05.318747 10 log.go:181] (0x4003974000) Data frame received for 5 I0817 11:43:05.318875 10 log.go:181] (0x4004234c80) (5) Data frame handling I0817 11:43:05.319050 10 log.go:181] (0x4003974000) Data frame received for 3 I0817 11:43:05.319180 10 log.go:181] (0x4002b96d20) (3) Data frame handling I0817 11:43:05.319331 10 log.go:181] (0x4002b96d20) (3) Data frame sent 
I0817 11:43:05.319477 10 log.go:181] (0x4003974000) Data frame received for 3 I0817 11:43:05.319560 10 log.go:181] (0x4002b96d20) (3) Data frame handling I0817 11:43:05.319903 10 log.go:181] (0x4003974000) Data frame received for 1 I0817 11:43:05.320009 10 log.go:181] (0x4004234be0) (1) Data frame handling I0817 11:43:05.320140 10 log.go:181] (0x4004234be0) (1) Data frame sent I0817 11:43:05.320261 10 log.go:181] (0x4003974000) (0x4004234be0) Stream removed, broadcasting: 1 I0817 11:43:05.320360 10 log.go:181] (0x4003974000) Go away received I0817 11:43:05.320628 10 log.go:181] (0x4003974000) (0x4004234be0) Stream removed, broadcasting: 1 I0817 11:43:05.320707 10 log.go:181] (0x4003974000) (0x4002b96d20) Stream removed, broadcasting: 3 I0817 11:43:05.320860 10 log.go:181] (0x4003974000) (0x4004234c80) Stream removed, broadcasting: 5 Aug 17 11:43:05.320: INFO: Exec stderr: "" Aug 17 11:43:05.321: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1797 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 17 11:43:05.321: INFO: >>> kubeConfig: /root/.kube/config I0817 11:43:05.375892 10 log.go:181] (0x4001e1e6e0) (0x400747e820) Create stream I0817 11:43:05.376045 10 log.go:181] (0x4001e1e6e0) (0x400747e820) Stream added, broadcasting: 1 I0817 11:43:05.379712 10 log.go:181] (0x4001e1e6e0) Reply frame received for 1 I0817 11:43:05.379837 10 log.go:181] (0x4001e1e6e0) (0x400016e1e0) Create stream I0817 11:43:05.379905 10 log.go:181] (0x4001e1e6e0) (0x400016e1e0) Stream added, broadcasting: 3 I0817 11:43:05.381241 10 log.go:181] (0x4001e1e6e0) Reply frame received for 3 I0817 11:43:05.381376 10 log.go:181] (0x4001e1e6e0) (0x4005dc4780) Create stream I0817 11:43:05.381448 10 log.go:181] (0x4001e1e6e0) (0x4005dc4780) Stream added, broadcasting: 5 I0817 11:43:05.382758 10 log.go:181] (0x4001e1e6e0) Reply frame received for 5 I0817 11:43:05.434949 10 log.go:181] (0x4001e1e6e0) Data frame received for 5 I0817 11:43:05.435128 10 log.go:181] (0x4005dc4780) (5) Data frame handling I0817 11:43:05.435373 10 log.go:181] (0x4001e1e6e0) Data frame received for 3 I0817 11:43:05.435543 10 log.go:181] (0x400016e1e0) (3) Data frame handling I0817 11:43:05.435729 10 log.go:181] (0x400016e1e0) (3) Data frame sent I0817 11:43:05.435902 10 log.go:181] (0x4001e1e6e0) Data frame received for 3 I0817 11:43:05.436014 10 log.go:181] (0x400016e1e0) (3) Data frame handling I0817 11:43:05.436430 10 log.go:181] (0x4001e1e6e0) Data frame received for 1 I0817 11:43:05.436614 10 log.go:181] (0x400747e820) (1) Data frame handling I0817 11:43:05.436892 10 log.go:181] (0x400747e820) (1) Data frame sent I0817 11:43:05.437059 10 log.go:181] (0x4001e1e6e0) (0x400747e820) Stream removed, broadcasting: 1 I0817 11:43:05.437263 10 log.go:181] (0x4001e1e6e0) Go away received I0817 11:43:05.437662 10 log.go:181] (0x4001e1e6e0) (0x400747e820) Stream removed, broadcasting: 1 I0817 11:43:05.437810 10 log.go:181] (0x4001e1e6e0) (0x400016e1e0) Stream removed, broadcasting: 3 I0817 11:43:05.437900 10 log.go:181] (0x4001e1e6e0) (0x4005dc4780) Stream removed, broadcasting: 5 Aug 17 11:43:05.437: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Aug 17 11:43:05.438: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1797 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 17 
11:43:05.438: INFO: >>> kubeConfig: /root/.kube/config I0817 11:43:05.510138 10 log.go:181] (0x4001e6cd10) (0x4002b96f00) Create stream I0817 11:43:05.510285 10 log.go:181] (0x4001e6cd10) (0x4002b96f00) Stream added, broadcasting: 1 I0817 11:43:05.515829 10 log.go:181] (0x4001e6cd10) Reply frame received for 1 I0817 11:43:05.516170 10 log.go:181] (0x4001e6cd10) (0x4002b97040) Create stream I0817 11:43:05.516337 10 log.go:181] (0x4001e6cd10) (0x4002b97040) Stream added, broadcasting: 3 I0817 11:43:05.518240 10 log.go:181] (0x4001e6cd10) Reply frame received for 3 I0817 11:43:05.518396 10 log.go:181] (0x4001e6cd10) (0x4002b970e0) Create stream I0817 11:43:05.518482 10 log.go:181] (0x4001e6cd10) (0x4002b970e0) Stream added, broadcasting: 5 I0817 11:43:05.520171 10 log.go:181] (0x4001e6cd10) Reply frame received for 5 I0817 11:43:05.573407 10 log.go:181] (0x4001e6cd10) Data frame received for 3 I0817 11:43:05.573604 10 log.go:181] (0x4002b97040) (3) Data frame handling I0817 11:43:05.573739 10 log.go:181] (0x4002b97040) (3) Data frame sent I0817 11:43:05.573853 10 log.go:181] (0x4001e6cd10) Data frame received for 3 I0817 11:43:05.573960 10 log.go:181] (0x4002b97040) (3) Data frame handling I0817 11:43:05.574918 10 log.go:181] (0x4001e6cd10) Data frame received for 5 I0817 11:43:05.575009 10 log.go:181] (0x4002b970e0) (5) Data frame handling I0817 11:43:05.577396 10 log.go:181] (0x4001e6cd10) Data frame received for 1 I0817 11:43:05.577568 10 log.go:181] (0x4002b96f00) (1) Data frame handling I0817 11:43:05.577754 10 log.go:181] (0x4002b96f00) (1) Data frame sent I0817 11:43:05.577964 10 log.go:181] (0x4001e6cd10) (0x4002b96f00) Stream removed, broadcasting: 1 I0817 11:43:05.578177 10 log.go:181] (0x4001e6cd10) Go away received I0817 11:43:05.578390 10 log.go:181] (0x4001e6cd10) (0x4002b96f00) Stream removed, broadcasting: 1 I0817 11:43:05.578623 10 log.go:181] (0x4001e6cd10) (0x4002b97040) Stream removed, broadcasting: 3 I0817 11:43:05.578737 10 log.go:181] (0x4001e6cd10) (0x4002b970e0) Stream removed, broadcasting: 5 Aug 17 11:43:05.578: INFO: Exec stderr: "" Aug 17 11:43:05.578: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1797 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 17 11:43:05.579: INFO: >>> kubeConfig: /root/.kube/config I0817 11:43:05.631673 10 log.go:181] (0x4001e6d080) (0x4002b972c0) Create stream I0817 11:43:05.631817 10 log.go:181] (0x4001e6d080) (0x4002b972c0) Stream added, broadcasting: 1 I0817 11:43:05.636640 10 log.go:181] (0x4001e6d080) Reply frame received for 1 I0817 11:43:05.636890 10 log.go:181] (0x4001e6d080) (0x4002b97360) Create stream I0817 11:43:05.636978 10 log.go:181] (0x4001e6d080) (0x4002b97360) Stream added, broadcasting: 3 I0817 11:43:05.638407 10 log.go:181] (0x4001e6d080) Reply frame received for 3 I0817 11:43:05.638639 10 log.go:181] (0x4001e6d080) (0x4002b97400) Create stream I0817 11:43:05.638910 10 log.go:181] (0x4001e6d080) (0x4002b97400) Stream added, broadcasting: 5 I0817 11:43:05.641068 10 log.go:181] (0x4001e6d080) Reply frame received for 5 I0817 11:43:05.698063 10 log.go:181] (0x4001e6d080) Data frame received for 3 I0817 11:43:05.698308 10 log.go:181] (0x4002b97360) (3) Data frame handling I0817 11:43:05.698451 10 log.go:181] (0x4002b97360) (3) Data frame sent I0817 11:43:05.698549 10 log.go:181] (0x4001e6d080) Data frame received for 3 I0817 11:43:05.698633 10 log.go:181] (0x4002b97360) (3) Data frame handling I0817 
11:43:05.698865 10 log.go:181] (0x4001e6d080) Data frame received for 5 I0817 11:43:05.699080 10 log.go:181] (0x4002b97400) (5) Data frame handling I0817 11:43:05.699225 10 log.go:181] (0x4001e6d080) Data frame received for 1 I0817 11:43:05.699323 10 log.go:181] (0x4002b972c0) (1) Data frame handling I0817 11:43:05.699431 10 log.go:181] (0x4002b972c0) (1) Data frame sent I0817 11:43:05.699538 10 log.go:181] (0x4001e6d080) (0x4002b972c0) Stream removed, broadcasting: 1 I0817 11:43:05.699656 10 log.go:181] (0x4001e6d080) Go away received I0817 11:43:05.700021 10 log.go:181] (0x4001e6d080) (0x4002b972c0) Stream removed, broadcasting: 1 I0817 11:43:05.700166 10 log.go:181] (0x4001e6d080) (0x4002b97360) Stream removed, broadcasting: 3 I0817 11:43:05.700283 10 log.go:181] (0x4001e6d080) (0x4002b97400) Stream removed, broadcasting: 5 Aug 17 11:43:05.700: INFO: Exec stderr: "" Aug 17 11:43:05.700: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1797 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 17 11:43:05.700: INFO: >>> kubeConfig: /root/.kube/config I0817 11:43:05.820308 10 log.go:181] (0x40010f8370) (0x4005dc4b40) Create stream I0817 11:43:05.820442 10 log.go:181] (0x40010f8370) (0x4005dc4b40) Stream added, broadcasting: 1 I0817 11:43:05.825732 10 log.go:181] (0x40010f8370) Reply frame received for 1 I0817 11:43:05.825861 10 log.go:181] (0x40010f8370) (0x400747e8c0) Create stream I0817 11:43:05.825923 10 log.go:181] (0x40010f8370) (0x400747e8c0) Stream added, broadcasting: 3 I0817 11:43:05.827202 10 log.go:181] (0x40010f8370) Reply frame received for 3 I0817 11:43:05.827311 10 log.go:181] (0x40010f8370) (0x4005dc4be0) Create stream I0817 11:43:05.827390 10 log.go:181] (0x40010f8370) (0x4005dc4be0) Stream added, broadcasting: 5 I0817 11:43:05.828619 10 log.go:181] (0x40010f8370) Reply frame received for 5 I0817 11:43:05.871788 10 log.go:181] (0x40010f8370) Data frame received for 3 I0817 11:43:05.871959 10 log.go:181] (0x400747e8c0) (3) Data frame handling I0817 11:43:05.872106 10 log.go:181] (0x40010f8370) Data frame received for 5 I0817 11:43:05.872263 10 log.go:181] (0x4005dc4be0) (5) Data frame handling I0817 11:43:05.872468 10 log.go:181] (0x400747e8c0) (3) Data frame sent I0817 11:43:05.872579 10 log.go:181] (0x40010f8370) Data frame received for 3 I0817 11:43:05.872677 10 log.go:181] (0x400747e8c0) (3) Data frame handling I0817 11:43:05.874003 10 log.go:181] (0x40010f8370) Data frame received for 1 I0817 11:43:05.874116 10 log.go:181] (0x4005dc4b40) (1) Data frame handling I0817 11:43:05.874234 10 log.go:181] (0x4005dc4b40) (1) Data frame sent I0817 11:43:05.874329 10 log.go:181] (0x40010f8370) (0x4005dc4b40) Stream removed, broadcasting: 1 I0817 11:43:05.874429 10 log.go:181] (0x40010f8370) Go away received I0817 11:43:05.874821 10 log.go:181] (0x40010f8370) (0x4005dc4b40) Stream removed, broadcasting: 1 I0817 11:43:05.874928 10 log.go:181] (0x40010f8370) (0x400747e8c0) Stream removed, broadcasting: 3 I0817 11:43:05.875019 10 log.go:181] (0x40010f8370) (0x4005dc4be0) Stream removed, broadcasting: 5 Aug 17 11:43:05.875: INFO: Exec stderr: "" Aug 17 11:43:05.875: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1797 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 17 11:43:05.875: INFO: >>> kubeConfig: /root/.kube/config I0817 11:43:05.934263 10 log.go:181] 
(0x4001e6d6b0) (0x4002b97720) Create stream I0817 11:43:05.934383 10 log.go:181] (0x4001e6d6b0) (0x4002b97720) Stream added, broadcasting: 1 I0817 11:43:05.938356 10 log.go:181] (0x4001e6d6b0) Reply frame received for 1 I0817 11:43:05.938484 10 log.go:181] (0x4001e6d6b0) (0x4002b97860) Create stream I0817 11:43:05.938560 10 log.go:181] (0x4001e6d6b0) (0x4002b97860) Stream added, broadcasting: 3 I0817 11:43:05.939899 10 log.go:181] (0x4001e6d6b0) Reply frame received for 3 I0817 11:43:05.940048 10 log.go:181] (0x4001e6d6b0) (0x4002b97900) Create stream I0817 11:43:05.940118 10 log.go:181] (0x4001e6d6b0) (0x4002b97900) Stream added, broadcasting: 5 I0817 11:43:05.941466 10 log.go:181] (0x4001e6d6b0) Reply frame received for 5 I0817 11:43:06.004505 10 log.go:181] (0x4001e6d6b0) Data frame received for 5 I0817 11:43:06.004674 10 log.go:181] (0x4002b97900) (5) Data frame handling I0817 11:43:06.004866 10 log.go:181] (0x4001e6d6b0) Data frame received for 3 I0817 11:43:06.004958 10 log.go:181] (0x4002b97860) (3) Data frame handling I0817 11:43:06.005089 10 log.go:181] (0x4002b97860) (3) Data frame sent I0817 11:43:06.005187 10 log.go:181] (0x4001e6d6b0) Data frame received for 3 I0817 11:43:06.005272 10 log.go:181] (0x4002b97860) (3) Data frame handling I0817 11:43:06.005583 10 log.go:181] (0x4001e6d6b0) Data frame received for 1 I0817 11:43:06.005678 10 log.go:181] (0x4002b97720) (1) Data frame handling I0817 11:43:06.005818 10 log.go:181] (0x4002b97720) (1) Data frame sent I0817 11:43:06.005921 10 log.go:181] (0x4001e6d6b0) (0x4002b97720) Stream removed, broadcasting: 1 I0817 11:43:06.006037 10 log.go:181] (0x4001e6d6b0) Go away received I0817 11:43:06.006308 10 log.go:181] (0x4001e6d6b0) (0x4002b97720) Stream removed, broadcasting: 1 I0817 11:43:06.006404 10 log.go:181] (0x4001e6d6b0) (0x4002b97860) Stream removed, broadcasting: 3 I0817 11:43:06.006484 10 log.go:181] (0x4001e6d6b0) (0x4002b97900) Stream removed, broadcasting: 5 Aug 17 11:43:06.006: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 11:43:06.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-1797" for this suite. 
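Stripped of the exec-stream framing above, the behavior under test can be checked with one pod and a cat; the pod name and image below are assumptions, and the marker comment is the header the kubelet writes into hosts files it manages:

kubectl run hosts-demo --image=busybox:1.29 --restart=Never -- sleep 3600
kubectl exec hosts-demo -- head -n 1 /etc/hosts
# Expected first line for a kubelet-managed file:
#   # Kubernetes-managed hosts file.
# With hostNetwork: true, or when a container mounts its own file over
# /etc/hosts, the kubelet leaves the file alone and the marker is absent;
# those are the three cases this spec distinguishes via busybox-1/2/3 and
# the test-host-network-pod.
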
• [SLOW TEST:15.833 seconds] [k8s.io] KubeletManagedEtcHosts /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":84,"skipped":1513,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 11:43:06.043: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check if kubectl can dry-run update Pods [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine Aug 17 11:43:06.248: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-1662' Aug 17 11:43:07.623: INFO: stderr: "" Aug 17 11:43:07.623: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: replace the image in the pod with server-side dry-run Aug 17 11:43:07.623: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod -o json --namespace=kubectl-1662' Aug 17 11:43:09.026: INFO: stderr: "" Aug 17 11:43:09.026: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-08-17T11:43:07Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {}\n }\n },\n \"manager\": \"kubectl-run\",\n \"operation\": \"Update\",\n \"time\": \"2020-08-17T11:43:07Z\"\n },\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n 
\"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:message\": {},\n \"f:reason\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:message\": {},\n \"f:reason\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:startTime\": {}\n }\n },\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"time\": \"2020-08-17T11:43:07Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-1662\",\n \"resourceVersion\": \"712885\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-1662/pods/e2e-test-httpd-pod\",\n \"uid\": \"590cda92-559c-46b2-934f-206df932cda9\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-bc224\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"latest-worker\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-bc224\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-bc224\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-17T11:43:07Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-17T11:43:07Z\",\n \"message\": \"containers with unready status: [e2e-test-httpd-pod]\",\n \"reason\": \"ContainersNotReady\",\n \"status\": \"False\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-17T11:43:07Z\",\n \"message\": \"containers with unready status: [e2e-test-httpd-pod]\",\n \"reason\": \"ContainersNotReady\",\n \"status\": \"False\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-17T11:43:07Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": false,\n \"restartCount\": 0,\n \"started\": false,\n \"state\": {\n \"waiting\": {\n \"reason\": \"ContainerCreating\"\n }\n }\n }\n ],\n \"hostIP\": \"172.18.0.11\",\n \"phase\": \"Pending\",\n \"qosClass\": \"BestEffort\",\n 
\"startTime\": \"2020-08-17T11:43:07Z\"\n }\n}\n" Aug 17 11:43:09.031: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config replace -f - --dry-run server --namespace=kubectl-1662' Aug 17 11:43:12.176: INFO: stderr: "W0817 11:43:10.119731 973 helpers.go:553] --dry-run is deprecated and can be replaced with --dry-run=client.\n" Aug 17 11:43:12.176: INFO: stdout: "pod/e2e-test-httpd-pod replaced (dry run)\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/httpd:2.4.38-alpine Aug 17 11:43:12.181: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-1662' Aug 17 11:43:20.033: INFO: stderr: "" Aug 17 11:43:20.033: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 11:43:20.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1662" for this suite. • [SLOW TEST:14.057 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl server-side dry-run /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:919 should check if kubectl can dry-run update Pods [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":303,"completed":85,"skipped":1515,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 11:43:20.103: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 17 11:43:20.195: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1142b1b4-eb3d-4819-b039-aa142061f69e" in namespace "downward-api-7874" to be "Succeeded or Failed" Aug 17 11:43:20.209: INFO: Pod "downwardapi-volume-1142b1b4-eb3d-4819-b039-aa142061f69e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 13.435694ms Aug 17 11:43:22.959: INFO: Pod "downwardapi-volume-1142b1b4-eb3d-4819-b039-aa142061f69e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.763459288s Aug 17 11:43:25.168: INFO: Pod "downwardapi-volume-1142b1b4-eb3d-4819-b039-aa142061f69e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.97296602s Aug 17 11:43:27.177: INFO: Pod "downwardapi-volume-1142b1b4-eb3d-4819-b039-aa142061f69e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.981548973s STEP: Saw pod success Aug 17 11:43:27.177: INFO: Pod "downwardapi-volume-1142b1b4-eb3d-4819-b039-aa142061f69e" satisfied condition "Succeeded or Failed" Aug 17 11:43:27.183: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-1142b1b4-eb3d-4819-b039-aa142061f69e container client-container: STEP: delete the pod Aug 17 11:43:27.875: INFO: Waiting for pod downwardapi-volume-1142b1b4-eb3d-4819-b039-aa142061f69e to disappear Aug 17 11:43:27.890: INFO: Pod downwardapi-volume-1142b1b4-eb3d-4819-b039-aa142061f69e no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 11:43:27.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7874" for this suite. • [SLOW TEST:7.801 seconds] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":303,"completed":86,"skipped":1534,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 11:43:27.906: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-c661923d-b0be-4733-b0ba-408cae6388d1 STEP: Creating a pod to test consume configMaps Aug 17 11:43:28.745: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-15efcd31-dde0-46e7-8f77-339f4c48b3e1" in namespace "projected-5469" to be "Succeeded or Failed" Aug 17 11:43:28.822: INFO: Pod "pod-projected-configmaps-15efcd31-dde0-46e7-8f77-339f4c48b3e1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 76.862106ms Aug 17 11:43:30.989: INFO: Pod "pod-projected-configmaps-15efcd31-dde0-46e7-8f77-339f4c48b3e1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.243730599s Aug 17 11:43:33.055: INFO: Pod "pod-projected-configmaps-15efcd31-dde0-46e7-8f77-339f4c48b3e1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.310059738s STEP: Saw pod success Aug 17 11:43:33.056: INFO: Pod "pod-projected-configmaps-15efcd31-dde0-46e7-8f77-339f4c48b3e1" satisfied condition "Succeeded or Failed" Aug 17 11:43:33.180: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-15efcd31-dde0-46e7-8f77-339f4c48b3e1 container projected-configmap-volume-test: STEP: delete the pod Aug 17 11:43:33.397: INFO: Waiting for pod pod-projected-configmaps-15efcd31-dde0-46e7-8f77-339f4c48b3e1 to disappear Aug 17 11:43:33.412: INFO: Pod pod-projected-configmaps-15efcd31-dde0-46e7-8f77-339f4c48b3e1 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 11:43:33.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5469" for this suite. • [SLOW TEST:5.623 seconds] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":303,"completed":87,"skipped":1549,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 11:43:33.530: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod busybox-da7aab30-0f1f-4b72-a47f-829026f5cb3c in namespace container-probe-8972 Aug 17 11:43:39.789: INFO: Started pod busybox-da7aab30-0f1f-4b72-a47f-829026f5cb3c in namespace container-probe-8972 STEP: checking the pod's current state and verifying that restartCount is present Aug 17 11:43:39.953: INFO: Initial restart count of pod busybox-da7aab30-0f1f-4b72-a47f-829026f5cb3c is 0 Aug 17 11:44:34.642: INFO: Restart count of 
pod container-probe-8972/busybox-da7aab30-0f1f-4b72-a47f-829026f5cb3c is now 1 (54.689203153s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 11:44:34.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8972" for this suite. • [SLOW TEST:61.360 seconds] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":303,"completed":88,"skipped":1558,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 11:44:34.892: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should get a host IP [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating pod Aug 17 11:44:47.129: INFO: Pod pod-hostip-07a43edf-f20e-4026-abef-110dbf3a5794 has hostIP: 172.18.0.11 [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 11:44:47.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9013" for this suite. 
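The hostIP assertion just logged has a short manual equivalent: status.hostIP is filled in once the pod is bound to a node and started. Names below are illustrative:

kubectl run hostip-demo --image=k8s.gcr.io/pause:3.2 --restart=Never
kubectl wait --for=condition=Ready pod/hostip-demo --timeout=60s
kubectl get pod hostip-demo -o jsonpath='{.status.hostIP}'   # a node address, e.g. 172.18.0.11 in this run
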
• [SLOW TEST:12.250 seconds] [k8s.io] Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should get a host IP [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":303,"completed":89,"skipped":1577,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 11:44:47.143: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 17 11:44:47.865: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2a612f98-72bf-4409-a54e-2b1374fe0d17" in namespace "projected-6786" to be "Succeeded or Failed" Aug 17 11:44:47.893: INFO: Pod "downwardapi-volume-2a612f98-72bf-4409-a54e-2b1374fe0d17": Phase="Pending", Reason="", readiness=false. Elapsed: 27.813264ms Aug 17 11:44:50.045: INFO: Pod "downwardapi-volume-2a612f98-72bf-4409-a54e-2b1374fe0d17": Phase="Pending", Reason="", readiness=false. Elapsed: 2.179627538s Aug 17 11:44:52.109: INFO: Pod "downwardapi-volume-2a612f98-72bf-4409-a54e-2b1374fe0d17": Phase="Running", Reason="", readiness=true. Elapsed: 4.243474659s Aug 17 11:44:54.116: INFO: Pod "downwardapi-volume-2a612f98-72bf-4409-a54e-2b1374fe0d17": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.250597432s STEP: Saw pod success Aug 17 11:44:54.116: INFO: Pod "downwardapi-volume-2a612f98-72bf-4409-a54e-2b1374fe0d17" satisfied condition "Succeeded or Failed" Aug 17 11:44:54.121: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-2a612f98-72bf-4409-a54e-2b1374fe0d17 container client-container: STEP: delete the pod Aug 17 11:44:54.290: INFO: Waiting for pod downwardapi-volume-2a612f98-72bf-4409-a54e-2b1374fe0d17 to disappear Aug 17 11:44:54.333: INFO: Pod downwardapi-volume-2a612f98-72bf-4409-a54e-2b1374fe0d17 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 11:44:54.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6786" for this suite. 
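The DefaultMode behavior verified above applies a single file mode to every file in a projected volume. A minimal manifest under assumed names (0400 is chosen for illustration; the exact mode the suite sets is not visible in this log):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    # stat -L follows the volume's ..data symlink to the real file; expect 400
    command: ["sh", "-c", "stat -Lc '%a' /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      defaultMode: 0400
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
EOF
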
• [SLOW TEST:7.787 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":90,"skipped":1591,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 17 11:44:54.933: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name cm-test-opt-del-46b5628b-f7fa-49c1-978b-18bbc55fcc7d
STEP: Creating configMap with name cm-test-opt-upd-8ad13522-8728-4479-ba15-e02211d30b7b
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-46b5628b-f7fa-49c1-978b-18bbc55fcc7d
STEP: Updating configmap cm-test-opt-upd-8ad13522-8728-4479-ba15-e02211d30b7b
STEP: Creating configMap with name cm-test-opt-create-6c74a7cb-5e2b-407d-9794-17fbedb32a59
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 17 11:46:32.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4518" for this suite.
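The create/update/delete dance above works because the volume's ConfigMap source is marked optional, so the pod keeps running while the referenced ConfigMap is deleted or not yet created, and the kubelet's periodic sync later reflects each change in the mounted files; that is why the spec waits to "observe update in volume" rather than asserting immediately. A minimal sketch of an optional ConfigMap volume (the ConfigMap name is taken from the log; the volume name is illustrative):

```go
// Sketch: an optional ConfigMap volume source. Optional=true lets the
// pod start and stay up even when the ConfigMap does not (yet) exist,
// which is exactly what the spec above relies on.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	optional := true
	vol := corev1.Volume{
		Name: "cm-volume", // illustrative volume name
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{
					Name: "cm-test-opt-create-6c74a7cb-5e2b-407d-9794-17fbedb32a59",
				},
				Optional: &optional,
			},
		},
	}
	fmt.Printf("%+v\n", vol)
}
```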
• [SLOW TEST:97.207 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":91,"skipped":1607,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 17 11:46:32.142: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Aug 17 11:46:36.901: INFO: The status of Pod test-webserver-c89657cb-cc72-4739-a3b7-cf5e2c64de30 is Pending, waiting for it to be Running (with Ready = true)
[over a hundred identical status entries, logged every one to a few seconds between 11:46:41 and 11:51:37, omitted here; the pod never left Pending]
Aug 17 11:51:39.271: INFO: The status of Pod test-webserver-c89657cb-cc72-4739-a3b7-cf5e2c64de30 is Pending, waiting for it to be Running (with Ready = true)
Aug 17 11:51:39.362: FAIL: Unexpected error:
    <*errors.errorString | 0x40035a4d60>: {
        s: "want pod 'test-webserver-c89657cb-cc72-4739-a3b7-cf5e2c64de30' on 'latest-worker2' to be 'Running' but was 'Pending'",
    }
    want pod 'test-webserver-c89657cb-cc72-4739-a3b7-cf5e2c64de30' on 'latest-worker2' to be 'Running' but was 'Pending'
occurred
Full Stack Trace
k8s.io/kubernetes/test/e2e/common.glob..func3.2()
    /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:71 +0x1a4
k8s.io/kubernetes/test/e2e.RunE2ETests(0x4002409500)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x320
k8s.io/kubernetes/test/e2e.TestE2E(0x4002409500)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x28
testing.tRunner(0x4002409500, 0x44e5dc0)
    /usr/local/go/src/testing/testing.go:1108 +0xdc
created by testing.(*T).Run
    /usr/local/go/src/testing/testing.go:1159 +0x2ec
[AfterEach] [k8s.io] Probing container
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
STEP: Collecting events from namespace "container-probe-4749".
STEP: Found 4 events.
Aug 17 11:51:43.583: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for test-webserver-c89657cb-cc72-4739-a3b7-cf5e2c64de30: { } Scheduled: Successfully assigned container-probe-4749/test-webserver-c89657cb-cc72-4739-a3b7-cf5e2c64de30 to latest-worker2
Aug 17 11:51:43.583: INFO: At 2020-08-17 11:48:02 +0000 UTC - event for test-webserver-c89657cb-cc72-4739-a3b7-cf5e2c64de30: {kubelet latest-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.20" already present on machine
Aug 17 11:51:43.583: INFO: At 2020-08-17 11:50:02 +0000 UTC - event for test-webserver-c89657cb-cc72-4739-a3b7-cf5e2c64de30: {kubelet latest-worker2} Failed: Error: context deadline exceeded
Aug 17 11:51:43.583: INFO: At 2020-08-17 11:50:03 +0000 UTC - event for test-webserver-c89657cb-cc72-4739-a3b7-cf5e2c64de30: {kubelet latest-worker2} Failed: Error: failed to reserve container name "test-webserver_test-webserver-c89657cb-cc72-4739-a3b7-cf5e2c64de30_container-probe-4749_06c13015-b6d7-4f3f-a9bd-adfe564e8d98_0": name "test-webserver_test-webserver-c89657cb-cc72-4739-a3b7-cf5e2c64de30_container-probe-4749_06c13015-b6d7-4f3f-a9bd-adfe564e8d98_0" is reserved for "545ae3f3d48399cae356592a4fe3eae7c1e8b8ed4ad2f2d85619425750d557fd"
Aug 17 11:51:46.519: INFO: POD NODE PHASE GRACE CONDITIONS
Aug 17 11:51:46.520: INFO: test-webserver-c89657cb-cc72-4739-a3b7-cf5e2c64de30 latest-worker2 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 11:46:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 11:46:37 +0000 UTC ContainersNotReady containers with unready status: [test-webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 11:46:37 +0000 UTC ContainersNotReady containers with unready status: [test-webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 11:46:34 +0000 UTC }]
Aug 17 11:51:46.520: INFO:
Aug 17 11:51:48.097: INFO: Logging node info for node latest-control-plane
Aug 17 11:51:48.427: INFO: Node Info: &Node{ObjectMeta:{latest-control-plane /api/v1/nodes/latest-control-plane e5265ef7-4fee-44e7-b227-c9d0aff11127 713989 0 2020-08-15 09:42:01 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2020-08-15 09:42:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2020-08-15 09:42:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}},"f:labels":{"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2020-08-17 11:49:36 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922108928 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922108928 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-08-17 11:49:36 +0000 UTC,LastTransitionTime:2020-08-15 09:41:59 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-08-17 11:49:36 +0000 UTC,LastTransitionTime:2020-08-15 09:41:59 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-08-17 11:49:36 +0000 UTC,LastTransitionTime:2020-08-15 09:41:59 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-08-17 11:49:36 +0000 UTC,LastTransitionTime:2020-08-15 09:42:31 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.12,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:355da13825784523b4a253c23edd1334,SystemUUID:8f367e0f-042b-45ff-9966-5ca6bcc1cc56,BootID:11738d2d-5baa-4089-8e7f-2fb0329fce58,KernelVersion:4.15.0-109-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.4.0-beta.1-85-g334f567e,KubeletVersion:v1.19.0-rc.1,KubeProxyVersion:v1.19.0-rc.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.7-0],SizeBytes:299470271,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.19.0-rc.1],SizeBytes:137937533,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.19.0-rc.1],SizeBytes:101224746,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.19.0-rc.1],SizeBytes:87920444,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.19.0-rc.1],SizeBytes:67843882,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Aug 17 11:51:48.436: INFO: Logging kubelet events for node latest-control-plane Aug 17 11:51:48.500: INFO: Logging pods the kubelet thinks is on node latest-control-plane Aug 17 11:51:48.545: INFO: kube-apiserver-latest-control-plane started at 2020-08-15 09:42:12 +0000 UTC (0+1 container statuses recorded) Aug 17 11:51:48.545: INFO: Container kube-apiserver ready: true, restart count 0 Aug 17 11:51:48.545: INFO: kube-scheduler-latest-control-plane started at 2020-08-15 09:42:12 +0000 UTC (0+1 container statuses recorded) Aug 17 11:51:48.545: INFO: Container kube-scheduler ready: true, restart count 4 Aug 17 11:51:48.545: INFO: kindnet-qmj2d started at 2020-08-15 09:42:20 +0000 UTC (0+1 container statuses recorded) Aug 17 11:51:48.545: INFO: Container kindnet-cni ready: true, restart count 0 Aug 17 11:51:48.545: INFO: coredns-f9fd979d6-f7hdg started at 2020-08-15 09:42:39 +0000 UTC (0+1 container statuses recorded) Aug 17 11:51:48.545: INFO: Container coredns ready: true, restart count 0 Aug 17 11:51:48.545: INFO: coredns-f9fd979d6-vxzgb started at 2020-08-15 09:42:40 +0000 UTC (0+1 container statuses recorded) Aug 17 11:51:48.545: INFO: Container coredns ready: true, restart count 0 Aug 17 11:51:48.545: INFO: etcd-latest-control-plane started at 2020-08-15 09:42:12 +0000 UTC (0+1 container statuses recorded) Aug 17 11:51:48.545: INFO: Container etcd ready: true, restart count 0 Aug 17 11:51:48.545: INFO: kube-controller-manager-latest-control-plane started at 2020-08-15 09:42:12 +0000 UTC (0+1 container statuses recorded) Aug 17 11:51:48.545: INFO: Container kube-controller-manager ready: true, restart count 8 Aug 17 11:51:48.546: INFO: kube-proxy-8zfjc started at 2020-08-15 09:42:20 +0000 UTC (0+1 container statuses recorded) Aug 17 11:51:48.546: INFO: Container kube-proxy ready: true, restart count 0 Aug 17 11:51:48.546: INFO: local-path-provisioner-8b46957d4-csnr8 started at 2020-08-15 09:42:41 +0000 UTC (0+1 container statuses recorded) Aug 17 11:51:48.546: INFO: Container local-path-provisioner ready: true, restart count 0 W0817 
11:51:48.611782 10 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Aug 17 11:51:48.717: INFO: Latency metrics for node latest-control-plane Aug 17 11:51:48.717: INFO: Logging node info for node latest-worker Aug 17 11:51:48.722: INFO: Node Info: &Node{ObjectMeta:{latest-worker /api/v1/nodes/latest-worker 004fc98a-1b9f-43ac-98e7-5d7f7d4d062a 713878 0 2020-08-15 09:42:30 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2020-08-15 09:42:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}},"f:labels":{"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}} {kubeadm Update v1 2020-08-15 09:42:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {e2e.test Update v1 2020-08-17 11:42:19 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakecpu":{}}}}} {kubelet Update v1 2020-08-17 11:48:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/fakecpu":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922108928 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakecpu: 
{{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922108928 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-08-17 11:48:50 +0000 UTC,LastTransitionTime:2020-08-15 09:42:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-08-17 11:48:50 +0000 UTC,LastTransitionTime:2020-08-15 09:42:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-08-17 11:48:50 +0000 UTC,LastTransitionTime:2020-08-15 09:42:30 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-08-17 11:48:50 +0000 UTC,LastTransitionTime:2020-08-15 09:43:20 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.11,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4962fc9ace3b4cf98891488fcb5c4ee6,SystemUUID:b6eda539-1b1b-4e57-b392-83081398c987,BootID:11738d2d-5baa-4089-8e7f-2fb0329fce58,KernelVersion:4.15.0-109-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.4.0-beta.1-85-g334f567e,KubeletVersion:v1.19.0-rc.1,KubeProxyVersion:v1.19.0-rc.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:232be9c5a4400e4c5e0932fde50af8f379e3e9ddd4d3f28da6ec78c86f6ed9f6 docker.io/ollivier/clearwater-cassandra:latest],SizeBytes:386367560,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:0b4d47a5161ecb6b44f6a479a27522b802096a2deea049cd6f3c01a62b585318 docker.io/ollivier/clearwater-homestead-prov:latest],SizeBytes:360604157,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:28557b896e190c72f02121314ca7c9abaca30f91a733b566b2c44b761e5a252c docker.io/ollivier/clearwater-ellis:latest],SizeBytes:351361235,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:257ef9011d4ff30771c0c48ef7e3b16926dce88c17d4435953f433fa9e0d731a docker.io/ollivier/clearwater-homer:latest],SizeBytes:344184630,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:eb85c150a60609d7b22b70b99d6a1a7a1c035fd64e30cca203a8b8d167bb7938 docker.io/ollivier/clearwater-astaire:latest],SizeBytes:327110542,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:95d9d53fc68c24deb2095b7b91aa7e53090f99e9c1d5c43dcf5d9a6fb8a8cdc2 docker.io/ollivier/clearwater-bono:latest],SizeBytes:303550943,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.7-0],SizeBytes:299470271,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:861863a8f603b8851858fcb66492d5fa8af26e14ec84a26da5d75fe762b144b2 docker.io/ollivier/clearwater-sprout:latest],SizeBytes:298507433,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:98347f9bf0eaf79649590e3fa0ea8d1938ae50d7703e8f9c171f0d74520ac7fb docker.io/ollivier/clearwater-homestead:latest],SizeBytes:295048084,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:adfa3978f2c94734010c014a2be7db9bc328419e0a205904543a86cd0719bd1a 
docker.io/ollivier/clearwater-ralf:latest],SizeBytes:287324942,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:3e838bae03946022eae06e3d343167d4b28507909e9c17e1bf574a23b423f83d docker.io/ollivier/clearwater-chronos:latest],SizeBytes:285384791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.19.0-rc.1],SizeBytes:137937533,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:4ba7f14019eaf22c4aa0095ebbce463fcbf2e2074f6dae826634ec7ce7a876e9 docker.io/aquasec/kube-hunter:latest],SizeBytes:117083310,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.19.0-rc.1],SizeBytes:101224746,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.19.0-rc.1],SizeBytes:87920444,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:735f090b15d5efc576da1602d8c678bf39a7605c0718ed915daec8f2297db2ff k8s.gcr.io/etcd:3.4.9],SizeBytes:86734312,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.19.0-rc.1],SizeBytes:67843882,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20 us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20],SizeBytes:46251412,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:77e928c23a5942aa681646be96dfb5897efe17b1e8676e8e94003ad08891b881 docker.io/ollivier/clearwater-live-test:latest],SizeBytes:39388175,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:17444032,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:d7dc3a4976d3bae4597677cbe5f9105877f4287771e555cd9b5c0fbca6105db6 docker.io/aquasec/kube-bench:latest],SizeBytes:8030821,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e 
gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:c8bf5bbec6397465a247c2bb3e589bb77e4f62ff88a027175ecb2d9e4f12c9d7 docker.io/appropriate/curl:latest],SizeBytes:2779755,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:1804628,},ContainerImage{Names:[docker.io/library/busybox@sha256:4f47c01fa91355af2865ac10fef5bf6ec9c7f42ad2321377c21e844427972977 docker.io/library/busybox:latest],SizeBytes:767890,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[docker.io/kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 docker.io/kubernetes/pause:latest],SizeBytes:74015,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Aug 17 11:51:48.726: INFO: Logging kubelet events for node latest-worker Aug 17 11:51:48.729: INFO: Logging pods the kubelet thinks is on node latest-worker Aug 17 11:51:48.753: INFO: kube-proxy-82wrf started at 2020-08-15 09:42:30 +0000 UTC (0+1 container statuses recorded) Aug 17 11:51:48.753: INFO: Container kube-proxy ready: true, restart count 0 Aug 17 11:51:48.753: INFO: kindnet-gmpqb started at 2020-08-15 09:42:30 +0000 UTC (0+1 container statuses recorded) Aug 17 11:51:48.754: INFO: Container kindnet-cni ready: true, restart count 0 Aug 17 11:51:48.754: INFO: pod-configmaps-41c85f5d-edc7-4b65-bcbf-ee2b30aef29f started at 2020-08-17 11:44:58 +0000 UTC (0+3 container statuses recorded) Aug 17 11:51:48.754: INFO: Container createcm-volume-test ready: true, restart count 0 Aug 17 11:51:48.754: INFO: Container delcm-volume-test ready: true, restart count 0 Aug 17 11:51:48.754: INFO: Container updcm-volume-test ready: true, restart count 0 W0817 11:51:48.767837 10 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
Aug 17 11:51:48.875: INFO: Latency metrics for node latest-worker Aug 17 11:51:48.875: INFO: Logging node info for node latest-worker2 Aug 17 11:51:48.881: INFO: Node Info: &Node{ObjectMeta:{latest-worker2 /api/v1/nodes/latest-worker2 0e8bca53-43cd-4827-990c-d232e1852e08 714011 0 2020-08-15 09:42:29 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2020-08-15 09:42:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}},"f:labels":{"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}} {kubeadm Update v1 2020-08-15 09:42:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kubelet Update v1 2020-08-17 11:49:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922108928 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922108928 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-08-17 11:49:47 +0000 UTC,LastTransitionTime:2020-08-15 09:42:29 +0000 
UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-08-17 11:49:47 +0000 UTC,LastTransitionTime:2020-08-15 09:42:29 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-08-17 11:49:47 +0000 UTC,LastTransitionTime:2020-08-15 09:42:29 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-08-17 11:49:47 +0000 UTC,LastTransitionTime:2020-08-15 09:42:50 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.14,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c01f9d6dc3c84901a8eec574df183c82,SystemUUID:9c567046-ce77-43e5-9100-5388d15772fe,BootID:11738d2d-5baa-4089-8e7f-2fb0329fce58,KernelVersion:4.15.0-109-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.4.0-beta.1-85-g334f567e,KubeletVersion:v1.19.0-rc.1,KubeProxyVersion:v1.19.0-rc.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:232be9c5a4400e4c5e0932fde50af8f379e3e9ddd4d3f28da6ec78c86f6ed9f6 docker.io/ollivier/clearwater-cassandra:latest],SizeBytes:386367560,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:0b4d47a5161ecb6b44f6a479a27522b802096a2deea049cd6f3c01a62b585318 docker.io/ollivier/clearwater-homestead-prov:latest],SizeBytes:360604157,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:28557b896e190c72f02121314ca7c9abaca30f91a733b566b2c44b761e5a252c docker.io/ollivier/clearwater-ellis:latest],SizeBytes:351361235,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:257ef9011d4ff30771c0c48ef7e3b16926dce88c17d4435953f433fa9e0d731a docker.io/ollivier/clearwater-homer:latest],SizeBytes:344184630,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:eb85c150a60609d7b22b70b99d6a1a7a1c035fd64e30cca203a8b8d167bb7938 docker.io/ollivier/clearwater-astaire:latest],SizeBytes:327110542,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:95d9d53fc68c24deb2095b7b91aa7e53090f99e9c1d5c43dcf5d9a6fb8a8cdc2 docker.io/ollivier/clearwater-bono:latest],SizeBytes:303550943,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12f377200949c25fde1e54bba639d34d119edd7cfcfb1d117526dba677c03c85 k8s.gcr.io/etcd:3.4.7 k8s.gcr.io/etcd:3.4.7-0],SizeBytes:299470271,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:861863a8f603b8851858fcb66492d5fa8af26e14ec84a26da5d75fe762b144b2 docker.io/ollivier/clearwater-sprout:latest],SizeBytes:298507433,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:98347f9bf0eaf79649590e3fa0ea8d1938ae50d7703e8f9c171f0d74520ac7fb docker.io/ollivier/clearwater-homestead:latest],SizeBytes:295048084,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:adfa3978f2c94734010c014a2be7db9bc328419e0a205904543a86cd0719bd1a docker.io/ollivier/clearwater-ralf:latest],SizeBytes:287324942,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:3e838bae03946022eae06e3d343167d4b28507909e9c17e1bf574a23b423f83d 
docker.io/ollivier/clearwater-chronos:latest],SizeBytes:285384791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.19.0-rc.1],SizeBytes:137937533,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:4ba7f14019eaf22c4aa0095ebbce463fcbf2e2074f6dae826634ec7ce7a876e9 docker.io/aquasec/kube-hunter:latest],SizeBytes:117083310,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.19.0-rc.1],SizeBytes:101224746,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.19.0-rc.1],SizeBytes:87920444,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:735f090b15d5efc576da1602d8c678bf39a7605c0718ed915daec8f2297db2ff k8s.gcr.io/etcd:3.4.9],SizeBytes:86734312,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.19.0-rc.1],SizeBytes:67843882,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:46251412,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:77e928c23a5942aa681646be96dfb5897efe17b1e8676e8e94003ad08891b881 docker.io/ollivier/clearwater-live-test:latest],SizeBytes:39388175,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:17444032,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:d7dc3a4976d3bae4597677cbe5f9105877f4287771e555cd9b5c0fbca6105db6 docker.io/aquasec/kube-bench:latest],SizeBytes:8030821,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 
gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:c8bf5bbec6397465a247c2bb3e589bb77e4f62ff88a027175ecb2d9e4f12c9d7 docker.io/appropriate/curl:latest],SizeBytes:2779755,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:1804628,},ContainerImage{Names:[docker.io/library/busybox@sha256:4f47c01fa91355af2865ac10fef5bf6ec9c7f42ad2321377c21e844427972977 docker.io/library/busybox:latest],SizeBytes:767890,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[docker.io/kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 docker.io/kubernetes/pause:latest],SizeBytes:74015,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Aug 17 11:51:48.886: INFO: Logging kubelet events for node latest-worker2 Aug 17 11:51:48.890: INFO: Logging pods the kubelet thinks is on node latest-worker2 Aug 17 11:51:48.913: INFO: test-webserver-c89657cb-cc72-4739-a3b7-cf5e2c64de30 started at 2020-08-17 11:46:37 +0000 UTC (0+1 container statuses recorded) Aug 17 11:51:48.913: INFO: Container test-webserver ready: false, restart count 0 Aug 17 11:51:48.913: INFO: kube-proxy-fjk8r started at 2020-08-15 09:42:29 +0000 UTC (0+1 container statuses recorded) Aug 17 11:51:48.913: INFO: Container kube-proxy ready: true, restart count 0 Aug 17 11:51:48.913: INFO: kindnet-grzzh started at 2020-08-15 09:42:30 +0000 UTC (0+1 container statuses recorded) Aug 17 11:51:48.913: INFO: Container kindnet-cni ready: true, restart count 0 W0817 11:51:48.926477 10 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Aug 17 11:51:49.017: INFO: Latency metrics for node latest-worker2 Aug 17 11:51:49.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4749" for this suite. 
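------------------------------
The node and image dump above is the diagnostic state the e2e framework records for each node when a spec fails; the failing spec, summarized just below, gates a pod's readiness on an HTTP probe with an initial delay. As a rough sketch of the shape of such a pod spec (the pod name, image, port, and 30s delay are illustrative placeholders rather than the suite's actual fixture, and corev1.Handler is the v1.19-era field name, renamed ProbeHandler in later client-go releases):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-webserver"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "test-webserver",
				Image: "docker.io/library/httpd:2.4.38-alpine", // placeholder; any webserver image would do
				ReadinessProbe: &corev1.Probe{
					// The spec asserts the pod is NOT ready before this
					// delay elapses and that the container never restarts.
					InitialDelaySeconds: 30,
					Handler: corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/",
							Port: intstr.FromInt(80),
						},
					},
				},
			}},
		},
	}
	fmt.Printf("%s: readiness gated by a %ds initial delay\n",
		pod.Name, pod.Spec.Containers[0].ReadinessProbe.InitialDelaySeconds)
}

Per the error text below, the assertion at container_probe.go:71 never got that far: the pod was still Pending when the framework expected Running, so the check timed out before the readiness probe was ever exercised.
------------------------------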
• Failure [316.887 seconds] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] [It] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 17 11:51:39.362: Unexpected error: <*errors.errorString | 0x40035a4d60>: { s: "want pod 'test-webserver-c89657cb-cc72-4739-a3b7-cf5e2c64de30' on 'latest-worker2' to be 'Running' but was 'Pending'", } want pod 'test-webserver-c89657cb-cc72-4739-a3b7-cf5e2c64de30' on 'latest-worker2' to be 'Running' but was 'Pending' occurred /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:71 ------------------------------ {"msg":"FAILED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":303,"completed":91,"skipped":1639,"failed":1,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]"]} SSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 11:51:49.031: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should retry creating failed daemon pods [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
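------------------------------
The repeated messages below come from the rollout check announced in the STEP above: the test's DaemonSet carries no toleration for the control-plane taint, so the framework skips latest-control-plane and keeps polling the remaining nodes until each runs an available daemon pod. For contrast, here is a minimal sketch of the toleration a pod would need in order to schedule onto the tainted node; the taint key and effect are copied from the log lines below, while the surrounding PodSpec is illustrative (system daemonsets such as kube-proxy typically carry an equivalent toleration):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Matches the reported taint {Key:node-role.kubernetes.io/master,
	// Value:"", Effect:NoSchedule}; Operator Exists matches any value.
	tol := corev1.Toleration{
		Key:      "node-role.kubernetes.io/master",
		Operator: corev1.TolerationOpExists,
		Effect:   corev1.TaintEffectNoSchedule,
	}
	spec := corev1.PodSpec{Tolerations: []corev1.Toleration{tol}}
	fmt.Printf("pod with %d toleration(s) tolerates %q (effect %s)\n",
		len(spec.Tolerations), tol.Key, tol.Effect)
}

With Operator Exists and an empty value, the toleration matches the taint exactly as printed, which is how system daemonsets land on control-plane nodes that conformance test workloads deliberately avoid.
------------------------------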
Aug 17 11:51:51.348: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 11:51:51.762: INFO: Number of nodes with available pods: 0 Aug 17 11:51:51.762: INFO: Node latest-worker is running more than one daemon pod Aug 17 11:51:52.854: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 11:51:53.615: INFO: Number of nodes with available pods: 0 Aug 17 11:51:53.615: INFO: Node latest-worker is running more than one daemon pod Aug 17 11:51:54.872: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 11:51:55.217: INFO: Number of nodes with available pods: 0 Aug 17 11:51:55.217: INFO: Node latest-worker is running more than one daemon pod Aug 17 11:51:56.391: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 11:51:56.401: INFO: Number of nodes with available pods: 0 Aug 17 11:51:56.401: INFO: Node latest-worker is running more than one daemon pod Aug 17 11:51:57.220: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 11:51:59.098: INFO: Number of nodes with available pods: 0 Aug 17 11:51:59.098: INFO: Node latest-worker is running more than one daemon pod Aug 17 11:52:00.342: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 11:52:00.463: INFO: Number of nodes with available pods: 0 Aug 17 11:52:00.463: INFO: Node latest-worker is running more than one daemon pod Aug 17 11:52:01.769: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 11:52:03.327: INFO: Number of nodes with available pods: 0 Aug 17 11:52:03.327: INFO: Node latest-worker is running more than one daemon pod Aug 17 11:52:03.847: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 11:52:04.080: INFO: Number of nodes with available pods: 0 Aug 17 11:52:04.081: INFO: Node latest-worker is running more than one daemon pod Aug 17 11:52:05.784: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 11:52:05.790: INFO: Number of nodes with available pods: 0 Aug 17 11:52:05.790: INFO: Node latest-worker is running more than one daemon pod Aug 17 11:52:07.190: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 11:52:07.667: INFO: Number of nodes with available pods: 0 Aug 17 11:52:07.667: INFO: Node latest-worker is running more than one daemon pod Aug 17 11:52:09.470: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints 
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 11:52:10.179: INFO: Number of nodes with available pods: 0 Aug 17 11:52:10.179: INFO: Node latest-worker is running more than one daemon pod
[... ~85 near-identical polling cycles between 11:52:10 and 11:55:31 elided: each cycle repeats the control-plane taint skip, "Number of nodes with available pods: 0", and "Node latest-worker is running more than one daemon pod" ...]
Aug 17 11:55:31.770: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 11:55:31.774: INFO: Number of nodes with available pods: 0 Aug 17 11:55:31.774: INFO: Node latest-worker is running more than one daemon
pod Aug 17 11:55:34.088: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 11:55:34.444: INFO: Number of nodes with available pods: 0 Aug 17 11:55:34.444: INFO: Node latest-worker is running more than one daemon pod Aug 17 11:55:34.772: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 11:55:34.777: INFO: Number of nodes with available pods: 0 Aug 17 11:55:34.777: INFO: Node latest-worker is running more than one daemon pod Aug 17 11:55:37.259: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 11:55:37.672: INFO: Number of nodes with available pods: 0 Aug 17 11:55:37.672: INFO: Node latest-worker is running more than one daemon pod Aug 17 11:55:38.950: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 11:55:39.808: INFO: Number of nodes with available pods: 0 Aug 17 11:55:39.808: INFO: Node latest-worker is running more than one daemon pod Aug 17 11:55:41.146: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 11:55:41.238: INFO: Number of nodes with available pods: 0 Aug 17 11:55:41.238: INFO: Node latest-worker is running more than one daemon pod Aug 17 11:55:41.771: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 11:55:41.776: INFO: Number of nodes with available pods: 0 Aug 17 11:55:41.776: INFO: Node latest-worker is running more than one daemon pod Aug 17 11:55:43.876: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 11:55:44.778: INFO: Number of nodes with available pods: 0 Aug 17 11:55:44.778: INFO: Node latest-worker is running more than one daemon pod Aug 17 11:55:47.193: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 11:55:48.270: INFO: Number of nodes with available pods: 0 Aug 17 11:55:48.270: INFO: Node latest-worker is running more than one daemon pod Aug 17 11:55:50.812: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 11:55:50.934: INFO: Number of nodes with available pods: 0 Aug 17 11:55:50.934: INFO: Node latest-worker is running more than one daemon pod Aug 17 11:55:52.146: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 11:55:52.151: INFO: Number of nodes with available pods: 0 Aug 17 11:55:52.151: INFO: Node latest-worker is running more than one daemon pod Aug 17 11:55:53.490: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints 
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 11:55:53.562: INFO: Number of nodes with available pods: 0 Aug 17 11:55:53.562: INFO: Node latest-worker is running more than one daemon pod Aug 17 11:55:54.914: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 11:55:55.461: INFO: Number of nodes with available pods: 0 Aug 17 11:55:55.461: INFO: Node latest-worker is running more than one daemon pod Aug 17 11:55:56.667: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 11:55:56.672: INFO: Number of nodes with available pods: 0 Aug 17 11:55:56.672: INFO: Node latest-worker is running more than one daemon pod Aug 17 11:55:56.773: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 11:55:56.778: INFO: Number of nodes with available pods: 0 Aug 17 11:55:56.778: INFO: Node latest-worker is running more than one daemon pod Aug 17 11:55:58.182: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 11:55:58.473: INFO: Number of nodes with available pods: 0 Aug 17 11:55:58.473: INFO: Node latest-worker is running more than one daemon pod Aug 17 11:56:02.558: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 11:56:04.264: INFO: Number of nodes with available pods: 0 Aug 17 11:56:04.264: INFO: Node latest-worker is running more than one daemon pod Aug 17 11:56:06.807: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 11:56:08.047: INFO: Number of nodes with available pods: 0 Aug 17 11:56:08.047: INFO: Node latest-worker is running more than one daemon pod Aug 17 11:56:08.966: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 11:56:08.969: INFO: Number of nodes with available pods: 0 Aug 17 11:56:08.969: INFO: Node latest-worker is running more than one daemon pod Aug 17 11:56:10.093: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 11:56:11.818: INFO: Number of nodes with available pods: 0 Aug 17 11:56:11.818: INFO: Node latest-worker is running more than one daemon pod Aug 17 11:56:14.134: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 11:56:15.290: INFO: Number of nodes with available pods: 0 Aug 17 11:56:15.290: INFO: Node latest-worker is running more than one daemon pod Aug 17 11:56:16.649: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 11:56:16.984: 
INFO: Number of nodes with available pods: 0 Aug 17 11:56:16.984: INFO: Node latest-worker is running more than one daemon pod Aug 17 11:56:17.954: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 11:56:18.251: INFO: Number of nodes with available pods: 0 Aug 17 11:56:18.251: INFO: Node latest-worker is running more than one daemon pod Aug 17 11:56:19.346: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 11:56:19.623: INFO: Number of nodes with available pods: 0 Aug 17 11:56:19.623: INFO: Node latest-worker is running more than one daemon pod Aug 17 11:56:20.160: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 11:56:20.245: INFO: Number of nodes with available pods: 0 Aug 17 11:56:20.245: INFO: Node latest-worker is running more than one daemon pod Aug 17 11:56:21.068: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 11:56:21.557: INFO: Number of nodes with available pods: 0 Aug 17 11:56:21.557: INFO: Node latest-worker is running more than one daemon pod Aug 17 11:56:21.770: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 11:56:21.775: INFO: Number of nodes with available pods: 0 Aug 17 11:56:21.775: INFO: Node latest-worker is running more than one daemon pod Aug 17 11:56:23.100: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 11:56:23.695: INFO: Number of nodes with available pods: 0 Aug 17 11:56:23.695: INFO: Node latest-worker is running more than one daemon pod Aug 17 11:56:24.961: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 11:56:25.259: INFO: Number of nodes with available pods: 0 Aug 17 11:56:25.259: INFO: Node latest-worker is running more than one daemon pod Aug 17 11:56:27.394: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 11:56:27.767: INFO: Number of nodes with available pods: 0 Aug 17 11:56:27.767: INFO: Node latest-worker is running more than one daemon pod Aug 17 11:56:30.144: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 11:56:30.751: INFO: Number of nodes with available pods: 0 Aug 17 11:56:30.751: INFO: Node latest-worker is running more than one daemon pod Aug 17 11:56:31.148: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 11:56:32.006: INFO: Number of nodes with available pods: 0 Aug 17 11:56:32.006: INFO: Node latest-worker is running more than one daemon 
pod Aug 17 11:56:32.771: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 11:56:32.777: INFO: Number of nodes with available pods: 0 Aug 17 11:56:32.777: INFO: Node latest-worker is running more than one daemon pod Aug 17 11:56:33.913: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 11:56:34.714: INFO: Number of nodes with available pods: 0 Aug 17 11:56:34.714: INFO: Node latest-worker is running more than one daemon pod Aug 17 11:56:35.136: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 11:56:35.141: INFO: Number of nodes with available pods: 0 Aug 17 11:56:35.141: INFO: Node latest-worker is running more than one daemon pod Aug 17 11:56:36.021: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 11:56:36.338: INFO: Number of nodes with available pods: 0 Aug 17 11:56:36.338: INFO: Node latest-worker is running more than one daemon pod Aug 17 11:56:36.782: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 11:56:37.576: INFO: Number of nodes with available pods: 0 Aug 17 11:56:37.576: INFO: Node latest-worker is running more than one daemon pod Aug 17 11:56:38.127: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 11:56:38.605: INFO: Number of nodes with available pods: 0 Aug 17 11:56:38.605: INFO: Node latest-worker is running more than one daemon pod Aug 17 11:56:39.156: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 11:56:39.244: INFO: Number of nodes with available pods: 0 Aug 17 11:56:39.244: INFO: Node latest-worker is running more than one daemon pod Aug 17 11:56:39.872: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 11:56:39.877: INFO: Number of nodes with available pods: 0 Aug 17 11:56:39.877: INFO: Node latest-worker is running more than one daemon pod Aug 17 11:56:40.771: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 11:56:40.777: INFO: Number of nodes with available pods: 0 Aug 17 11:56:40.777: INFO: Node latest-worker is running more than one daemon pod Aug 17 11:56:42.644: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 11:56:43.192: INFO: Number of nodes with available pods: 0 Aug 17 11:56:43.192: INFO: Node latest-worker is running more than one daemon pod Aug 17 11:56:44.574: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints 
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 11:56:44.579: INFO: Number of nodes with available pods: 0 Aug 17 11:56:44.579: INFO: Node latest-worker is running more than one daemon pod Aug 17 11:56:45.139: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 11:56:46.090: INFO: Number of nodes with available pods: 0 Aug 17 11:56:46.090: INFO: Node latest-worker is running more than one daemon pod Aug 17 11:56:46.768: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 11:56:46.773: INFO: Number of nodes with available pods: 0 Aug 17 11:56:46.773: INFO: Node latest-worker is running more than one daemon pod Aug 17 11:56:47.919: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 11:56:48.108: INFO: Number of nodes with available pods: 0 Aug 17 11:56:48.108: INFO: Node latest-worker is running more than one daemon pod Aug 17 11:56:48.844: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 11:56:48.875: INFO: Number of nodes with available pods: 0 Aug 17 11:56:48.875: INFO: Node latest-worker is running more than one daemon pod Aug 17 11:56:49.772: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 11:56:49.777: INFO: Number of nodes with available pods: 0 Aug 17 11:56:49.777: INFO: Node latest-worker is running more than one daemon pod Aug 17 11:56:51.358: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 11:56:51.817: INFO: Number of nodes with available pods: 0 Aug 17 11:56:51.817: INFO: Node latest-worker is running more than one daemon pod Aug 17 11:56:52.070: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 11:56:52.075: INFO: Number of nodes with available pods: 0 Aug 17 11:56:52.075: INFO: Node latest-worker is running more than one daemon pod
Aug 17 11:56:52.078: FAIL: error waiting for daemon pod to start
Unexpected error:
    <*errors.errorString | 0x40002761f0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/apps.glob..func3.6()
	/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:291 +0x3e4
k8s.io/kubernetes/test/e2e.RunE2ETests(0x4002409500)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x320
k8s.io/kubernetes/test/e2e.TestE2E(0x4002409500)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x28
testing.tRunner(0x4002409500, 0x44e5dc0)
	/usr/local/go/src/testing/testing.go:1108 +0xdc
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1159 +0x2ec
[AfterEach] [sig-apps] Daemon set [Serial]
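Editor's note: the repeated "skip checking this node" / "Number of nodes with available pods" records above are the e2e framework polling the DaemonSet until every schedulable node reports one available pod; latest-control-plane is excluded because the pod template carries no toleration for its node-role.kubernetes.io/master:NoSchedule taint, and the FAIL fires once the poll budget is exhausted and the wait helper returns "timed out waiting for the condition". A minimal client-go sketch of this kind of availability poll follows; the namespace, DaemonSet name, and 5-minute budget are assumptions taken from the log, and this is not the framework's actual helper in daemon_set.go. The teardown log continues after the sketch.

    package main

    import (
        "context"
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumption: same kubeconfig path the run itself logs.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // Poll until the DaemonSet reports one available pod per node it is
        // supposed to schedule to. wait.PollImmediate surfaces ErrWaitTimeout
        // ("timed out waiting for the condition") when the budget runs out,
        // which is exactly the error wrapped in the FAIL above.
        err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
            ds, err := cs.AppsV1().DaemonSets("daemonsets-2713").Get(context.TODO(), "daemon-set", metav1.GetOptions{})
            if err != nil {
                return false, err
            }
            fmt.Printf("Number of nodes with available pods: %d\n", ds.Status.NumberAvailable)
            return ds.Status.DesiredNumberScheduled > 0 &&
                ds.Status.NumberAvailable == ds.Status.DesiredNumberScheduled, nil
        })
        if err != nil {
            fmt.Println("error waiting for daemon pod to start:", err)
        }
    }

In this run the poll could never succeed: the namespace events collected below show FailedCreatePodSandBox on both workers, so no daemon pod ever became available.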
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2713, will wait for the garbage collector to delete the pods Aug 17 11:56:52.875: INFO: Deleting DaemonSet.extensions daemon-set took: 226.937648ms Aug 17 11:56:54.676: INFO: Terminating DaemonSet.extensions daemon-set pods took: 1.800454331s Aug 17 11:57:20.082: INFO: Number of nodes with available pods: 0 Aug 17 11:57:20.082: INFO: Number of running nodes: 0, number of available pods: 0 Aug 17 11:57:20.086: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2713/daemonsets","resourceVersion":"714952"},"items":null} Aug 17 11:57:20.089: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2713/pods","resourceVersion":"714952"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 STEP: Collecting events from namespace "daemonsets-2713". STEP: Found 8 events. Aug 17 11:57:20.164: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for daemon-set-64g72: { } Scheduled: Successfully assigned daemonsets-2713/daemon-set-64g72 to latest-worker2 Aug 17 11:57:20.164: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for daemon-set-hxlxj: { } Scheduled: Successfully assigned daemonsets-2713/daemon-set-hxlxj to latest-worker Aug 17 11:57:20.164: INFO: At 2020-08-17 11:51:51 +0000 UTC - event for daemon-set: {daemonset-controller } SuccessfulCreate: Created pod: daemon-set-hxlxj Aug 17 11:57:20.164: INFO: At 2020-08-17 11:51:51 +0000 UTC - event for daemon-set: {daemonset-controller } SuccessfulCreate: Created pod: daemon-set-64g72 Aug 17 11:57:20.164: INFO: At 2020-08-17 11:55:52 +0000 UTC - event for daemon-set-64g72: {kubelet latest-worker2} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = DeadlineExceeded desc = context deadline exceeded Aug 17 11:57:20.164: INFO: At 2020-08-17 11:55:52 +0000 UTC - event for daemon-set-hxlxj: {kubelet latest-worker} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = DeadlineExceeded desc = context deadline exceeded Aug 17 11:57:20.164: INFO: At 2020-08-17 11:57:17 +0000 UTC - event for daemon-set-64g72: {kubelet latest-worker2} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create containerd task: OCI runtime create failed: container_linux.go:367: starting container process caused: process_linux.go:459: container init caused:: unknown Aug 17 11:57:20.164: INFO: At 2020-08-17 11:57:17 +0000 UTC - event for daemon-set-hxlxj: {kubelet latest-worker} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create containerd task: OCI runtime create failed: container_linux.go:367: starting container process caused: process_linux.go:459: container init caused:: unknown Aug 17 11:57:20.166: INFO: POD NODE PHASE GRACE CONDITIONS Aug 17 11:57:20.166: INFO: Aug 17 11:57:20.174: INFO: Logging node info for node latest-control-plane Aug 17 11:57:20.177: INFO: Node Info: &Node{ObjectMeta:{latest-control-plane /api/v1/nodes/latest-control-plane e5265ef7-4fee-44e7-b227-c9d0aff11127 714618 0 2020-08-15 09:42:01 +0000 UTC 
map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2020-08-15 09:42:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2020-08-15 09:42:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}},"f:labels":{"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2020-08-17 11:54:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922108928 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922108928 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-08-17 11:54:37 +0000 UTC,LastTransitionTime:2020-08-15 09:41:59 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory 
available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-08-17 11:54:37 +0000 UTC,LastTransitionTime:2020-08-15 09:41:59 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-08-17 11:54:37 +0000 UTC,LastTransitionTime:2020-08-15 09:41:59 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-08-17 11:54:37 +0000 UTC,LastTransitionTime:2020-08-15 09:42:31 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.12,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:355da13825784523b4a253c23edd1334,SystemUUID:8f367e0f-042b-45ff-9966-5ca6bcc1cc56,BootID:11738d2d-5baa-4089-8e7f-2fb0329fce58,KernelVersion:4.15.0-109-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.4.0-beta.1-85-g334f567e,KubeletVersion:v1.19.0-rc.1,KubeProxyVersion:v1.19.0-rc.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.7-0],SizeBytes:299470271,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.19.0-rc.1],SizeBytes:137937533,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.19.0-rc.1],SizeBytes:101224746,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.19.0-rc.1],SizeBytes:87920444,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.19.0-rc.1],SizeBytes:67843882,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Aug 17 11:57:20.179: INFO: Logging kubelet events for node latest-control-plane Aug 17 11:57:20.182: INFO: Logging pods the kubelet thinks is on node latest-control-plane Aug 17 11:57:20.220: INFO: coredns-f9fd979d6-f7hdg started at 2020-08-15 09:42:39 +0000 UTC (0+1 container statuses recorded) Aug 17 11:57:20.220: INFO: Container coredns ready: true, restart count 0 Aug 17 11:57:20.220: INFO: coredns-f9fd979d6-vxzgb started at 2020-08-15 09:42:40 +0000 UTC (0+1 container statuses recorded) Aug 17 11:57:20.220: INFO: Container coredns ready: true, restart count 0 Aug 17 11:57:20.220: INFO: kube-apiserver-latest-control-plane started at 2020-08-15 09:42:12 +0000 UTC (0+1 container statuses recorded) Aug 17 11:57:20.220: INFO: Container kube-apiserver ready: true, restart count 0 Aug 17 11:57:20.220: INFO: kube-scheduler-latest-control-plane started at 2020-08-15 09:42:12 +0000 UTC (0+1 container statuses recorded) Aug 17 11:57:20.220: INFO: Container kube-scheduler ready: true, restart count 4 Aug 17 11:57:20.220: INFO: kindnet-qmj2d started at 2020-08-15 09:42:20 +0000 UTC (0+1 container statuses recorded) Aug 17 11:57:20.220: INFO: Container kindnet-cni ready: true, restart count 0 Aug 17 11:57:20.220: INFO: local-path-provisioner-8b46957d4-csnr8 started at 2020-08-15 09:42:41 +0000 UTC (0+1 container statuses recorded) Aug 17 11:57:20.220: INFO: Container local-path-provisioner ready: true, restart count 0 Aug 17 
11:57:20.220: INFO: etcd-latest-control-plane started at 2020-08-15 09:42:12 +0000 UTC (0+1 container statuses recorded) Aug 17 11:57:20.220: INFO: Container etcd ready: true, restart count 0 Aug 17 11:57:20.220: INFO: kube-controller-manager-latest-control-plane started at 2020-08-15 09:42:12 +0000 UTC (0+1 container statuses recorded) Aug 17 11:57:20.220: INFO: Container kube-controller-manager ready: true, restart count 8 Aug 17 11:57:20.220: INFO: kube-proxy-8zfjc started at 2020-08-15 09:42:20 +0000 UTC (0+1 container statuses recorded) Aug 17 11:57:20.220: INFO: Container kube-proxy ready: true, restart count 0 W0817 11:57:20.236082 10 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Aug 17 11:57:20.331: INFO: Latency metrics for node latest-control-plane Aug 17 11:57:20.331: INFO: Logging node info for node latest-worker Aug 17 11:57:20.335: INFO: Node Info: &Node{ObjectMeta:{latest-worker /api/v1/nodes/latest-worker 004fc98a-1b9f-43ac-98e7-5d7f7d4d062a 714528 0 2020-08-15 09:42:30 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2020-08-15 09:42:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}},"f:labels":{"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}} {kubeadm Update v1 2020-08-15 09:42:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {e2e.test Update v1 2020-08-17 11:42:19 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakecpu":{}}}}} {kubelet Update v1 2020-08-17 11:53:50 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/fakecpu":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922108928 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922108928 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-08-17 11:53:50 +0000 UTC,LastTransitionTime:2020-08-15 09:42:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-08-17 11:53:50 +0000 UTC,LastTransitionTime:2020-08-15 09:42:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-08-17 11:53:50 +0000 UTC,LastTransitionTime:2020-08-15 09:42:30 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-08-17 11:53:50 +0000 UTC,LastTransitionTime:2020-08-15 09:43:20 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.11,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4962fc9ace3b4cf98891488fcb5c4ee6,SystemUUID:b6eda539-1b1b-4e57-b392-83081398c987,BootID:11738d2d-5baa-4089-8e7f-2fb0329fce58,KernelVersion:4.15.0-109-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.4.0-beta.1-85-g334f567e,KubeletVersion:v1.19.0-rc.1,KubeProxyVersion:v1.19.0-rc.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:232be9c5a4400e4c5e0932fde50af8f379e3e9ddd4d3f28da6ec78c86f6ed9f6 docker.io/ollivier/clearwater-cassandra:latest],SizeBytes:386367560,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:0b4d47a5161ecb6b44f6a479a27522b802096a2deea049cd6f3c01a62b585318 docker.io/ollivier/clearwater-homestead-prov:latest],SizeBytes:360604157,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:28557b896e190c72f02121314ca7c9abaca30f91a733b566b2c44b761e5a252c docker.io/ollivier/clearwater-ellis:latest],SizeBytes:351361235,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:257ef9011d4ff30771c0c48ef7e3b16926dce88c17d4435953f433fa9e0d731a docker.io/ollivier/clearwater-homer:latest],SizeBytes:344184630,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:eb85c150a60609d7b22b70b99d6a1a7a1c035fd64e30cca203a8b8d167bb7938 docker.io/ollivier/clearwater-astaire:latest],SizeBytes:327110542,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:95d9d53fc68c24deb2095b7b91aa7e53090f99e9c1d5c43dcf5d9a6fb8a8cdc2 docker.io/ollivier/clearwater-bono:latest],SizeBytes:303550943,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.7-0],SizeBytes:299470271,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:861863a8f603b8851858fcb66492d5fa8af26e14ec84a26da5d75fe762b144b2 docker.io/ollivier/clearwater-sprout:latest],SizeBytes:298507433,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:98347f9bf0eaf79649590e3fa0ea8d1938ae50d7703e8f9c171f0d74520ac7fb docker.io/ollivier/clearwater-homestead:latest],SizeBytes:295048084,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:adfa3978f2c94734010c014a2be7db9bc328419e0a205904543a86cd0719bd1a docker.io/ollivier/clearwater-ralf:latest],SizeBytes:287324942,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:3e838bae03946022eae06e3d343167d4b28507909e9c17e1bf574a23b423f83d docker.io/ollivier/clearwater-chronos:latest],SizeBytes:285384791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.19.0-rc.1],SizeBytes:137937533,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:4ba7f14019eaf22c4aa0095ebbce463fcbf2e2074f6dae826634ec7ce7a876e9 docker.io/aquasec/kube-hunter:latest],SizeBytes:117083310,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.19.0-rc.1],SizeBytes:101224746,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.19.0-rc.1],SizeBytes:87920444,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:735f090b15d5efc576da1602d8c678bf39a7605c0718ed915daec8f2297db2ff k8s.gcr.io/etcd:3.4.9],SizeBytes:86734312,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb 
gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.19.0-rc.1],SizeBytes:67843882,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20 us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20],SizeBytes:46251412,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:77e928c23a5942aa681646be96dfb5897efe17b1e8676e8e94003ad08891b881 docker.io/ollivier/clearwater-live-test:latest],SizeBytes:39388175,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:17444032,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:d7dc3a4976d3bae4597677cbe5f9105877f4287771e555cd9b5c0fbca6105db6 docker.io/aquasec/kube-bench:latest],SizeBytes:8030821,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:c8bf5bbec6397465a247c2bb3e589bb77e4f62ff88a027175ecb2d9e4f12c9d7 docker.io/appropriate/curl:latest],SizeBytes:2779755,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:1804628,},ContainerImage{Names:[docker.io/library/busybox@sha256:4f47c01fa91355af2865ac10fef5bf6ec9c7f42ad2321377c21e844427972977 docker.io/library/busybox:latest],SizeBytes:767890,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 
docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[docker.io/kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 docker.io/kubernetes/pause:latest],SizeBytes:74015,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Aug 17 11:57:20.338: INFO: Logging kubelet events for node latest-worker Aug 17 11:57:20.341: INFO: Logging pods the kubelet thinks is on node latest-worker Aug 17 11:57:20.363: INFO: kube-proxy-82wrf started at 2020-08-15 09:42:30 +0000 UTC (0+1 container statuses recorded) Aug 17 11:57:20.363: INFO: Container kube-proxy ready: true, restart count 0 Aug 17 11:57:20.363: INFO: kindnet-gmpqb started at 2020-08-15 09:42:30 +0000 UTC (0+1 container statuses recorded) Aug 17 11:57:20.363: INFO: Container kindnet-cni ready: true, restart count 0 W0817 11:57:20.375637 10 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Aug 17 11:57:20.466: INFO: Latency metrics for node latest-worker Aug 17 11:57:20.466: INFO: Logging node info for node latest-worker2 Aug 17 11:57:20.581: INFO: Node Info: &Node{ObjectMeta:{latest-worker2 /api/v1/nodes/latest-worker2 0e8bca53-43cd-4827-990c-d232e1852e08 714639 0 2020-08-15 09:42:29 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2020-08-15 09:42:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}},"f:labels":{"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}} {kubeadm Update v1 2020-08-15 09:42:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kubelet Update v1 2020-08-17 11:54:47 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922108928 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922108928 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-08-17 11:54:47 +0000 UTC,LastTransitionTime:2020-08-15 09:42:29 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-08-17 11:54:47 +0000 UTC,LastTransitionTime:2020-08-15 09:42:29 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-08-17 11:54:47 +0000 UTC,LastTransitionTime:2020-08-15 09:42:29 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-08-17 11:54:47 +0000 UTC,LastTransitionTime:2020-08-15 09:42:50 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.14,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c01f9d6dc3c84901a8eec574df183c82,SystemUUID:9c567046-ce77-43e5-9100-5388d15772fe,BootID:11738d2d-5baa-4089-8e7f-2fb0329fce58,KernelVersion:4.15.0-109-generic,OSImage:Ubuntu 20.04 
LTS,ContainerRuntimeVersion:containerd://1.4.0-beta.1-85-g334f567e,KubeletVersion:v1.19.0-rc.1,KubeProxyVersion:v1.19.0-rc.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:232be9c5a4400e4c5e0932fde50af8f379e3e9ddd4d3f28da6ec78c86f6ed9f6 docker.io/ollivier/clearwater-cassandra:latest],SizeBytes:386367560,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:0b4d47a5161ecb6b44f6a479a27522b802096a2deea049cd6f3c01a62b585318 docker.io/ollivier/clearwater-homestead-prov:latest],SizeBytes:360604157,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:28557b896e190c72f02121314ca7c9abaca30f91a733b566b2c44b761e5a252c docker.io/ollivier/clearwater-ellis:latest],SizeBytes:351361235,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:257ef9011d4ff30771c0c48ef7e3b16926dce88c17d4435953f433fa9e0d731a docker.io/ollivier/clearwater-homer:latest],SizeBytes:344184630,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:eb85c150a60609d7b22b70b99d6a1a7a1c035fd64e30cca203a8b8d167bb7938 docker.io/ollivier/clearwater-astaire:latest],SizeBytes:327110542,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:95d9d53fc68c24deb2095b7b91aa7e53090f99e9c1d5c43dcf5d9a6fb8a8cdc2 docker.io/ollivier/clearwater-bono:latest],SizeBytes:303550943,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12f377200949c25fde1e54bba639d34d119edd7cfcfb1d117526dba677c03c85 k8s.gcr.io/etcd:3.4.7 k8s.gcr.io/etcd:3.4.7-0],SizeBytes:299470271,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:861863a8f603b8851858fcb66492d5fa8af26e14ec84a26da5d75fe762b144b2 docker.io/ollivier/clearwater-sprout:latest],SizeBytes:298507433,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:98347f9bf0eaf79649590e3fa0ea8d1938ae50d7703e8f9c171f0d74520ac7fb docker.io/ollivier/clearwater-homestead:latest],SizeBytes:295048084,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:adfa3978f2c94734010c014a2be7db9bc328419e0a205904543a86cd0719bd1a docker.io/ollivier/clearwater-ralf:latest],SizeBytes:287324942,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:3e838bae03946022eae06e3d343167d4b28507909e9c17e1bf574a23b423f83d docker.io/ollivier/clearwater-chronos:latest],SizeBytes:285384791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.19.0-rc.1],SizeBytes:137937533,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:4ba7f14019eaf22c4aa0095ebbce463fcbf2e2074f6dae826634ec7ce7a876e9 docker.io/aquasec/kube-hunter:latest],SizeBytes:117083310,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.19.0-rc.1],SizeBytes:101224746,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.19.0-rc.1],SizeBytes:87920444,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:735f090b15d5efc576da1602d8c678bf39a7605c0718ed915daec8f2297db2ff k8s.gcr.io/etcd:3.4.9],SizeBytes:86734312,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb 
gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.19.0-rc.1],SizeBytes:67843882,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:46251412,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:77e928c23a5942aa681646be96dfb5897efe17b1e8676e8e94003ad08891b881 docker.io/ollivier/clearwater-live-test:latest],SizeBytes:39388175,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:17444032,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:d7dc3a4976d3bae4597677cbe5f9105877f4287771e555cd9b5c0fbca6105db6 docker.io/aquasec/kube-bench:latest],SizeBytes:8030821,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:c8bf5bbec6397465a247c2bb3e589bb77e4f62ff88a027175ecb2d9e4f12c9d7 docker.io/appropriate/curl:latest],SizeBytes:2779755,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:1804628,},ContainerImage{Names:[docker.io/library/busybox@sha256:4f47c01fa91355af2865ac10fef5bf6ec9c7f42ad2321377c21e844427972977 docker.io/library/busybox:latest],SizeBytes:767890,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 
docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[docker.io/kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 docker.io/kubernetes/pause:latest],SizeBytes:74015,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Aug 17 11:57:20.583: INFO: Logging kubelet events for node latest-worker2 Aug 17 11:57:20.586: INFO: Logging pods the kubelet thinks is on node latest-worker2 Aug 17 11:57:20.611: INFO: kube-proxy-fjk8r started at 2020-08-15 09:42:29 +0000 UTC (0+1 container statuses recorded) Aug 17 11:57:20.612: INFO: Container kube-proxy ready: true, restart count 0 Aug 17 11:57:20.612: INFO: kindnet-grzzh started at 2020-08-15 09:42:30 +0000 UTC (0+1 container statuses recorded) Aug 17 11:57:20.612: INFO: Container kindnet-cni ready: true, restart count 0 W0817 11:57:20.627269 10 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Aug 17 11:57:20.713: INFO: Latency metrics for node latest-worker2 Aug 17 11:57:20.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-2713" for this suite.
• Failure [331.986 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance] [It]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597

  Aug 17 11:56:52.078: error waiting for daemon pod to start
  Unexpected error:
      <*errors.errorString | 0x40002761f0>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:291
------------------------------
{"msg":"FAILED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":303,"completed":91,"skipped":1643,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 11:57:21.019: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]
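Editor's note: the spec beginning here creates a ClusterIP Service with sessionAffinity: ClientIP backed by the affinity-clusterip-transition replication controller, probes it (the nc and curl execs quoted below), then flips affinity off and checks that requests spread across endpoints again. A minimal client-go sketch of the two Service states follows; the names mirror the log, but the helper itself is illustrative, not the e2e framework's own code, and the pod selector label is an assumption. The spec's own path marker and steps continue below.

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        svc := &corev1.Service{
            ObjectMeta: metav1.ObjectMeta{Name: "affinity-clusterip-transition", Namespace: "services-1955"},
            Spec: corev1.ServiceSpec{
                // Assumption: the RC's pods carry this label.
                Selector: map[string]string{"name": "affinity-clusterip-transition"},
                Ports:    []corev1.ServicePort{{Port: 80}},
                // ClientIP pins each client address to a single backend pod.
                SessionAffinity: corev1.ServiceAffinityClientIP,
            },
        }
        svc, err = cs.CoreV1().Services(svc.Namespace).Create(context.TODO(), svc, metav1.CreateOptions{})
        if err != nil {
            panic(err)
        }

        // Switching to None lets kube-proxy balance requests again; the curl
        // loop in the log exercises the service on both sides of this transition.
        svc.Spec.SessionAffinity = corev1.ServiceAffinityNone
        if _, err := cs.CoreV1().Services(svc.Namespace).Update(context.TODO(), svc, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }
        fmt.Println("session affinity switched to None")
    }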
[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 11:57:21.019: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-1955 STEP: creating service affinity-clusterip-transition in namespace services-1955 STEP: creating replication controller affinity-clusterip-transition in namespace services-1955 I0817 11:57:21.732104 10 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-1955, replica count: 3 I0817 11:57:24.783540 10 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0817 11:57:33.785454 10 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 17 11:57:34.150: INFO: Creating new exec pod Aug 17 11:57:41.453: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-1955 execpod-affinity7zgv8 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80' Aug 17 11:57:55.774: INFO: stderr: "+ nc -zv -t -w 2 affinity-clusterip-transition 80\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\n" Aug 17 11:57:55.775: INFO: stdout: "" Aug 17 11:57:55.778: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-1955 execpod-affinity7zgv8 -- /bin/sh -x -c nc -zv -t -w 2 10.109.76.161 80' Aug 17 11:57:57.545: INFO: stderr: "+ nc -zv -t -w 2 10.109.76.161 80\nConnection to 10.109.76.161 80 port [tcp/http] succeeded!\n" Aug 17 11:57:57.546: INFO: stdout: ""
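Note: the two one-shot netcat probes above are a connectivity gate: the test only starts measuring affinity once both the service DNS name and its ClusterIP (10.109.76.161) accept TCP on port 80 from inside the client pod. A minimal sketch of driving such a probe from Go by shelling out to kubectl, as the Running '...' lines show the suite doing (the --server/--kubeconfig flags from the log are omitted here for brevity):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // One-shot TCP connect check, as in the log:
        // nc -zv (scan only, verbose) -t (TCP) -w 2 (2-second timeout).
        cmd := exec.Command("kubectl", "exec", "--namespace=services-1955",
            "execpod-affinity7zgv8", "--",
            "/bin/sh", "-x", "-c", "nc -zv -t -w 2 affinity-clusterip-transition 80")
        out, err := cmd.CombinedOutput()
        // Both the "+ nc ..." trace (from sh -x) and the
        // "Connection to ... succeeded!" line (from nc -v) arrive on stderr,
        // which is why the log shows them under stderr with an empty stdout.
        fmt.Printf("%s(err=%v)\n", out, err)
    }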
"\naffinity-clusterip-transition-jwsrs\naffinity-clusterip-transition-2dcqj\naffinity-clusterip-transition-2dcqj\naffinity-clusterip-transition-2dcqj\naffinity-clusterip-transition-7gnqr\naffinity-clusterip-transition-7gnqr\naffinity-clusterip-transition-jwsrs\naffinity-clusterip-transition-2dcqj\naffinity-clusterip-transition-7gnqr\naffinity-clusterip-transition-2dcqj\naffinity-clusterip-transition-2dcqj\naffinity-clusterip-transition-7gnqr\naffinity-clusterip-transition-2dcqj\naffinity-clusterip-transition-7gnqr\naffinity-clusterip-transition-2dcqj\naffinity-clusterip-transition-2dcqj" Aug 17 11:57:59.378: INFO: Received response from host: affinity-clusterip-transition-jwsrs Aug 17 11:57:59.378: INFO: Received response from host: affinity-clusterip-transition-2dcqj Aug 17 11:57:59.378: INFO: Received response from host: affinity-clusterip-transition-2dcqj Aug 17 11:57:59.378: INFO: Received response from host: affinity-clusterip-transition-2dcqj Aug 17 11:57:59.378: INFO: Received response from host: affinity-clusterip-transition-7gnqr Aug 17 11:57:59.378: INFO: Received response from host: affinity-clusterip-transition-7gnqr Aug 17 11:57:59.378: INFO: Received response from host: affinity-clusterip-transition-jwsrs Aug 17 11:57:59.378: INFO: Received response from host: affinity-clusterip-transition-2dcqj Aug 17 11:57:59.378: INFO: Received response from host: affinity-clusterip-transition-7gnqr Aug 17 11:57:59.378: INFO: Received response from host: affinity-clusterip-transition-2dcqj Aug 17 11:57:59.378: INFO: Received response from host: affinity-clusterip-transition-2dcqj Aug 17 11:57:59.378: INFO: Received response from host: affinity-clusterip-transition-7gnqr Aug 17 11:57:59.378: INFO: Received response from host: affinity-clusterip-transition-2dcqj Aug 17 11:57:59.378: INFO: Received response from host: affinity-clusterip-transition-7gnqr Aug 17 11:57:59.378: INFO: Received response from host: affinity-clusterip-transition-2dcqj Aug 17 11:57:59.378: INFO: Received response from host: affinity-clusterip-transition-2dcqj Aug 17 11:57:59.389: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-1955 execpod-affinity7zgv8 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.109.76.161:80/ ; done' Aug 17 11:58:01.161: INFO: stderr: "I0817 11:58:00.961324 1074 log.go:181] (0x400003a840) (0x4000b28460) Create stream\nI0817 11:58:00.972124 1074 log.go:181] (0x400003a840) (0x4000b28460) Stream added, broadcasting: 1\nI0817 11:58:00.982339 1074 log.go:181] (0x400003a840) Reply frame received for 1\nI0817 11:58:00.983366 1074 log.go:181] (0x400003a840) (0x4000516320) Create stream\nI0817 11:58:00.983461 1074 log.go:181] (0x400003a840) (0x4000516320) Stream added, broadcasting: 3\nI0817 11:58:00.985009 1074 log.go:181] (0x400003a840) Reply frame received for 3\nI0817 11:58:00.985299 1074 log.go:181] (0x400003a840) (0x40004c4000) Create stream\nI0817 11:58:00.985369 1074 log.go:181] (0x400003a840) (0x40004c4000) Stream added, broadcasting: 5\nI0817 11:58:00.986721 1074 log.go:181] (0x400003a840) Reply frame received for 5\nI0817 11:58:01.067147 1074 log.go:181] (0x400003a840) Data frame received for 5\nI0817 11:58:01.067408 1074 log.go:181] (0x400003a840) Data frame received for 3\nI0817 11:58:01.067523 1074 log.go:181] (0x4000516320) (3) Data frame handling\nI0817 11:58:01.067617 1074 log.go:181] (0x40004c4000) (5) Data frame handling\nI0817 11:58:01.068213 1074 log.go:181] 
Aug 17 11:57:59.389: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-1955 execpod-affinity7zgv8 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.109.76.161:80/ ; done' Aug 17 11:58:01.161: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.109.76.161:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.109.76.161:80/\n..." Aug 17 11:58:01.166: INFO: stdout: 
"\naffinity-clusterip-transition-2dcqj\naffinity-clusterip-transition-2dcqj\naffinity-clusterip-transition-2dcqj\naffinity-clusterip-transition-2dcqj\naffinity-clusterip-transition-2dcqj\naffinity-clusterip-transition-2dcqj\naffinity-clusterip-transition-2dcqj\naffinity-clusterip-transition-2dcqj\naffinity-clusterip-transition-2dcqj\naffinity-clusterip-transition-2dcqj\naffinity-clusterip-transition-2dcqj\naffinity-clusterip-transition-2dcqj\naffinity-clusterip-transition-2dcqj\naffinity-clusterip-transition-2dcqj\naffinity-clusterip-transition-2dcqj\naffinity-clusterip-transition-2dcqj" Aug 17 11:58:01.166: INFO: Received response from host: affinity-clusterip-transition-2dcqj Aug 17 11:58:01.166: INFO: Received response from host: affinity-clusterip-transition-2dcqj Aug 17 11:58:01.167: INFO: Received response from host: affinity-clusterip-transition-2dcqj Aug 17 11:58:01.167: INFO: Received response from host: affinity-clusterip-transition-2dcqj Aug 17 11:58:01.167: INFO: Received response from host: affinity-clusterip-transition-2dcqj Aug 17 11:58:01.167: INFO: Received response from host: affinity-clusterip-transition-2dcqj Aug 17 11:58:01.167: INFO: Received response from host: affinity-clusterip-transition-2dcqj Aug 17 11:58:01.167: INFO: Received response from host: affinity-clusterip-transition-2dcqj Aug 17 11:58:01.167: INFO: Received response from host: affinity-clusterip-transition-2dcqj Aug 17 11:58:01.167: INFO: Received response from host: affinity-clusterip-transition-2dcqj Aug 17 11:58:01.167: INFO: Received response from host: affinity-clusterip-transition-2dcqj Aug 17 11:58:01.167: INFO: Received response from host: affinity-clusterip-transition-2dcqj Aug 17 11:58:01.167: INFO: Received response from host: affinity-clusterip-transition-2dcqj Aug 17 11:58:01.167: INFO: Received response from host: affinity-clusterip-transition-2dcqj Aug 17 11:58:01.167: INFO: Received response from host: affinity-clusterip-transition-2dcqj Aug 17 11:58:01.167: INFO: Received response from host: affinity-clusterip-transition-2dcqj Aug 17 11:58:01.167: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-1955, will wait for the garbage collector to delete the pods Aug 17 11:58:01.405: INFO: Deleting ReplicationController affinity-clusterip-transition took: 6.085129ms Aug 17 11:58:02.506: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 1.100553688s [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 11:58:21.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1955" for this suite. 
[AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:60.297 seconds] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":303,"completed":92,"skipped":1667,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 11:58:21.320: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9612.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9612.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9612.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9612.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9612.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-9612.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9612.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-9612.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9612.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-9612.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9612.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-9612.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-9612.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 69.135.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.135.69_udp@PTR;check="$$(dig +tcp +noall +answer +search 69.135.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.135.69_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9612.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9612.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9612.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9612.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9612.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-9612.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9612.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-9612.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9612.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-9612.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9612.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-9612.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9612.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 69.135.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.135.69_udp@PTR;check="$$(dig +tcp +noall +answer +search 69.135.106.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.106.135.69_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 17 11:58:38.212: INFO: Unable to read wheezy_udp@dns-test-service.dns-9612.svc.cluster.local from pod dns-9612/dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f: the server could not find the requested resource (get pods dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f) Aug 17 11:58:38.385: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9612.svc.cluster.local from pod dns-9612/dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f: the server could not find the requested resource (get pods dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f) Aug 17 11:58:38.390: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9612.svc.cluster.local from pod dns-9612/dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f: the server could not find the requested resource (get pods dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f) Aug 17 11:58:38.394: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9612.svc.cluster.local from pod dns-9612/dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f: the server could not find the requested resource (get pods dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f) Aug 17 11:58:38.412: INFO: Unable to read jessie_udp@dns-test-service.dns-9612.svc.cluster.local from pod dns-9612/dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f: the server could not find the requested resource (get pods dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f) Aug 17 11:58:38.416: INFO: Unable to read jessie_tcp@dns-test-service.dns-9612.svc.cluster.local from pod dns-9612/dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f: the server could not find the requested resource (get pods dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f) Aug 17 11:58:38.419: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9612.svc.cluster.local from pod dns-9612/dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f: the server could not find the requested resource (get pods dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f) Aug 17 11:58:38.422: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9612.svc.cluster.local from pod dns-9612/dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f: the server could not find the requested resource (get pods dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f) Aug 17 11:58:38.445: INFO: Lookups using dns-9612/dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f failed for: [wheezy_udp@dns-test-service.dns-9612.svc.cluster.local wheezy_tcp@dns-test-service.dns-9612.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9612.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9612.svc.cluster.local jessie_udp@dns-test-service.dns-9612.svc.cluster.local jessie_tcp@dns-test-service.dns-9612.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9612.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9612.svc.cluster.local] Aug 17 11:58:43.543: INFO: Unable to read wheezy_udp@dns-test-service.dns-9612.svc.cluster.local from pod dns-9612/dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f: the server could not find the requested resource (get pods dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f) Aug 17 11:58:43.883: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9612.svc.cluster.local from pod dns-9612/dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f: the server could not find the requested resource (get pods 
dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f) Aug 17 11:58:43.888: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9612.svc.cluster.local from pod dns-9612/dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f: the server could not find the requested resource (get pods dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f) Aug 17 11:58:43.891: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9612.svc.cluster.local from pod dns-9612/dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f: the server could not find the requested resource (get pods dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f) Aug 17 11:58:43.916: INFO: Unable to read jessie_udp@dns-test-service.dns-9612.svc.cluster.local from pod dns-9612/dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f: the server could not find the requested resource (get pods dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f) Aug 17 11:58:43.920: INFO: Unable to read jessie_tcp@dns-test-service.dns-9612.svc.cluster.local from pod dns-9612/dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f: the server could not find the requested resource (get pods dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f) Aug 17 11:58:43.924: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9612.svc.cluster.local from pod dns-9612/dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f: the server could not find the requested resource (get pods dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f) Aug 17 11:58:43.927: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9612.svc.cluster.local from pod dns-9612/dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f: the server could not find the requested resource (get pods dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f) Aug 17 11:58:43.946: INFO: Lookups using dns-9612/dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f failed for: [wheezy_udp@dns-test-service.dns-9612.svc.cluster.local wheezy_tcp@dns-test-service.dns-9612.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9612.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9612.svc.cluster.local jessie_udp@dns-test-service.dns-9612.svc.cluster.local jessie_tcp@dns-test-service.dns-9612.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9612.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9612.svc.cluster.local] Aug 17 11:58:49.078: INFO: Unable to read wheezy_udp@dns-test-service.dns-9612.svc.cluster.local from pod dns-9612/dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f: the server could not find the requested resource (get pods dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f) Aug 17 11:58:49.349: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9612.svc.cluster.local from pod dns-9612/dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f: the server could not find the requested resource (get pods dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f) Aug 17 11:58:49.422: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9612.svc.cluster.local from pod dns-9612/dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f: the server could not find the requested resource (get pods dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f) Aug 17 11:58:49.432: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9612.svc.cluster.local from pod dns-9612/dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f: the server could not find the requested resource (get pods dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f) Aug 17 11:58:49.489: INFO: Unable to read jessie_udp@dns-test-service.dns-9612.svc.cluster.local from pod dns-9612/dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f: the 
server could not find the requested resource (get pods dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f) Aug 17 11:58:49.491: INFO: Unable to read jessie_tcp@dns-test-service.dns-9612.svc.cluster.local from pod dns-9612/dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f: the server could not find the requested resource (get pods dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f) Aug 17 11:58:49.494: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9612.svc.cluster.local from pod dns-9612/dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f: the server could not find the requested resource (get pods dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f) Aug 17 11:58:49.496: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9612.svc.cluster.local from pod dns-9612/dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f: the server could not find the requested resource (get pods dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f) Aug 17 11:58:49.512: INFO: Lookups using dns-9612/dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f failed for: [wheezy_udp@dns-test-service.dns-9612.svc.cluster.local wheezy_tcp@dns-test-service.dns-9612.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9612.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9612.svc.cluster.local jessie_udp@dns-test-service.dns-9612.svc.cluster.local jessie_tcp@dns-test-service.dns-9612.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9612.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9612.svc.cluster.local] Aug 17 11:58:53.451: INFO: Unable to read wheezy_udp@dns-test-service.dns-9612.svc.cluster.local from pod dns-9612/dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f: the server could not find the requested resource (get pods dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f) Aug 17 11:58:53.456: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9612.svc.cluster.local from pod dns-9612/dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f: the server could not find the requested resource (get pods dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f) Aug 17 11:58:53.460: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9612.svc.cluster.local from pod dns-9612/dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f: the server could not find the requested resource (get pods dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f) Aug 17 11:58:53.464: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9612.svc.cluster.local from pod dns-9612/dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f: the server could not find the requested resource (get pods dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f) Aug 17 11:58:53.790: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.dns-9612.svc.cluster.local from pod dns-9612/dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f: Get "https://172.30.12.66:45453/api/v1/namespaces/dns-9612/pods/dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f/proxy/results/wheezy_udp@_http._tcp.test-service-2.dns-9612.svc.cluster.local": stream error: stream ID 5727; INTERNAL_ERROR Aug 17 11:58:54.766: INFO: Unable to read jessie_udp@dns-test-service.dns-9612.svc.cluster.local from pod dns-9612/dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f: the server could not find the requested resource (get pods dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f) Aug 17 11:58:54.769: INFO: Unable to read jessie_tcp@dns-test-service.dns-9612.svc.cluster.local from pod dns-9612/dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f: the server could not find the requested resource (get pods 
dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f) Aug 17 11:58:54.773: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9612.svc.cluster.local from pod dns-9612/dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f: the server could not find the requested resource (get pods dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f) Aug 17 11:58:54.778: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9612.svc.cluster.local from pod dns-9612/dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f: the server could not find the requested resource (get pods dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f) Aug 17 11:58:54.899: INFO: Lookups using dns-9612/dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f failed for: [wheezy_udp@dns-test-service.dns-9612.svc.cluster.local wheezy_tcp@dns-test-service.dns-9612.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9612.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9612.svc.cluster.local wheezy_udp@_http._tcp.test-service-2.dns-9612.svc.cluster.local jessie_udp@dns-test-service.dns-9612.svc.cluster.local jessie_tcp@dns-test-service.dns-9612.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9612.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9612.svc.cluster.local] Aug 17 11:58:58.450: INFO: Unable to read wheezy_udp@dns-test-service.dns-9612.svc.cluster.local from pod dns-9612/dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f: the server could not find the requested resource (get pods dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f) Aug 17 11:58:58.454: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9612.svc.cluster.local from pod dns-9612/dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f: the server could not find the requested resource (get pods dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f) Aug 17 11:58:58.458: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9612.svc.cluster.local from pod dns-9612/dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f: the server could not find the requested resource (get pods dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f) Aug 17 11:58:58.461: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9612.svc.cluster.local from pod dns-9612/dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f: the server could not find the requested resource (get pods dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f) Aug 17 11:58:58.483: INFO: Unable to read jessie_udp@dns-test-service.dns-9612.svc.cluster.local from pod dns-9612/dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f: the server could not find the requested resource (get pods dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f) Aug 17 11:58:58.486: INFO: Unable to read jessie_tcp@dns-test-service.dns-9612.svc.cluster.local from pod dns-9612/dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f: the server could not find the requested resource (get pods dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f) Aug 17 11:58:58.489: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9612.svc.cluster.local from pod dns-9612/dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f: the server could not find the requested resource (get pods dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f) Aug 17 11:58:58.534: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9612.svc.cluster.local from pod dns-9612/dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f: the server could not find the requested resource (get pods dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f) Aug 17 11:58:58.553: INFO: Lookups using dns-9612/dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f failed 
for: [wheezy_udp@dns-test-service.dns-9612.svc.cluster.local wheezy_tcp@dns-test-service.dns-9612.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9612.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9612.svc.cluster.local jessie_udp@dns-test-service.dns-9612.svc.cluster.local jessie_tcp@dns-test-service.dns-9612.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9612.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9612.svc.cluster.local] Aug 17 11:59:05.642: INFO: DNS probes using dns-9612/dns-test-7d74966c-7f0d-4fb1-b6d6-a1c6cd27dd8f succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 11:59:07.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9612" for this suite. • [SLOW TEST:45.879 seconds] [sig-network] DNS /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":303,"completed":93,"skipped":1704,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 11:59:07.201: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should run through the lifecycle of a ServiceAccount [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a ServiceAccount STEP: watching for the ServiceAccount to be added STEP: patching the ServiceAccount STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector) STEP: deleting the ServiceAccount [AfterEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 11:59:08.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-6531" for this suite. 
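
For reference, the ServiceAccount lifecycle steps above (create, patch, list by label selector, delete) map onto plain kubectl calls; a minimal sketch, with all names, namespaces, and labels illustrative rather than taken from this run:

  kubectl create serviceaccount demo-sa --namespace demo-ns
  kubectl patch serviceaccount demo-sa --namespace demo-ns \
    -p '{"metadata":{"labels":{"purpose":"demo"}}}'              # patch the ServiceAccount
  kubectl get serviceaccounts --all-namespaces -l purpose=demo   # find it via a label selector
  kubectl delete serviceaccount demo-sa --namespace demo-ns
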
•{"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":303,"completed":94,"skipped":1718,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 11:59:08.697: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should release no longer matching pods [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Aug 17 11:59:09.591: INFO: Pod name pod-release: Found 0 pods out of 1 Aug 17 11:59:14.674: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 11:59:15.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-7838" for this suite. 
• [SLOW TEST:6.473 seconds] [sig-apps] ReplicationController /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":303,"completed":95,"skipped":1742,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 11:59:15.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 17 11:59:15.385: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Aug 17 11:59:26.688: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7711 create -f -' Aug 17 11:59:44.028: INFO: stderr: "" Aug 17 11:59:44.028: INFO: stdout: "e2e-test-crd-publish-openapi-1616-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Aug 17 11:59:44.029: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7711 delete e2e-test-crd-publish-openapi-1616-crds test-cr' Aug 17 11:59:45.429: INFO: stderr: "" Aug 17 11:59:45.429: INFO: stdout: "e2e-test-crd-publish-openapi-1616-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Aug 17 11:59:45.429: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7711 apply -f -' Aug 17 11:59:47.753: INFO: stderr: "" Aug 17 11:59:47.753: INFO: stdout: "e2e-test-crd-publish-openapi-1616-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Aug 17 11:59:47.754: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7711 delete e2e-test-crd-publish-openapi-1616-crds test-cr' Aug 17 11:59:49.669: INFO: stderr: "" Aug 17 11:59:49.669: INFO: stdout: "e2e-test-crd-publish-openapi-1616-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Aug 17 11:59:49.670: INFO: 
Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1616-crds' Aug 17 11:59:57.072: INFO: stderr: "" Aug 17 11:59:57.072: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1616-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:00:18.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7711" for this suite. • [SLOW TEST:64.673 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":303,"completed":96,"skipped":1742,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} S ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:00:19.845: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-projected-hr7c STEP: Creating a pod to test atomic-volume-subpath Aug 17 12:00:20.915: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-hr7c" in namespace "subpath-3648" to be "Succeeded or Failed" Aug 17 12:00:20.947: INFO: Pod "pod-subpath-test-projected-hr7c": Phase="Pending", Reason="", readiness=false. Elapsed: 32.219565ms Aug 17 12:00:23.028: INFO: Pod "pod-subpath-test-projected-hr7c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.113153246s Aug 17 12:00:25.034: INFO: Pod "pod-subpath-test-projected-hr7c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.118875467s Aug 17 12:00:27.042: INFO: Pod "pod-subpath-test-projected-hr7c": Phase="Running", Reason="", readiness=true. 
Elapsed: 6.127294302s Aug 17 12:00:29.049: INFO: Pod "pod-subpath-test-projected-hr7c": Phase="Running", Reason="", readiness=true. Elapsed: 8.133655323s Aug 17 12:00:31.436: INFO: Pod "pod-subpath-test-projected-hr7c": Phase="Running", Reason="", readiness=true. Elapsed: 10.521517632s Aug 17 12:00:33.443: INFO: Pod "pod-subpath-test-projected-hr7c": Phase="Running", Reason="", readiness=true. Elapsed: 12.528457397s Aug 17 12:00:35.448: INFO: Pod "pod-subpath-test-projected-hr7c": Phase="Running", Reason="", readiness=true. Elapsed: 14.533183922s Aug 17 12:00:37.454: INFO: Pod "pod-subpath-test-projected-hr7c": Phase="Running", Reason="", readiness=true. Elapsed: 16.538891642s Aug 17 12:00:39.460: INFO: Pod "pod-subpath-test-projected-hr7c": Phase="Running", Reason="", readiness=true. Elapsed: 18.545169712s Aug 17 12:00:41.501: INFO: Pod "pod-subpath-test-projected-hr7c": Phase="Running", Reason="", readiness=true. Elapsed: 20.586269472s Aug 17 12:00:43.508: INFO: Pod "pod-subpath-test-projected-hr7c": Phase="Running", Reason="", readiness=true. Elapsed: 22.593009205s Aug 17 12:00:45.515: INFO: Pod "pod-subpath-test-projected-hr7c": Phase="Running", Reason="", readiness=true. Elapsed: 24.599693165s Aug 17 12:00:47.522: INFO: Pod "pod-subpath-test-projected-hr7c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.606873244s STEP: Saw pod success Aug 17 12:00:47.522: INFO: Pod "pod-subpath-test-projected-hr7c" satisfied condition "Succeeded or Failed" Aug 17 12:00:47.527: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-projected-hr7c container test-container-subpath-projected-hr7c: STEP: delete the pod Aug 17 12:00:47.755: INFO: Waiting for pod pod-subpath-test-projected-hr7c to disappear Aug 17 12:00:47.775: INFO: Pod pod-subpath-test-projected-hr7c no longer exists STEP: Deleting pod pod-subpath-test-projected-hr7c Aug 17 12:00:47.775: INFO: Deleting pod "pod-subpath-test-projected-hr7c" in namespace "subpath-3648" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:00:48.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3648" for this suite. 
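
For reference, the atomic-volume-subpath pod exercised above boils down to a volumeMount with a subPath into a projected volume; a minimal sketch, assuming a ConfigMap named demo-config with a key named data already exists in the current namespace (note that a subPath mount of a projected or ConfigMap volume does not receive later updates to the source object):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: subpath-projected-demo
  spec:
    restartPolicy: Never
    volumes:
    - name: projected-vol
      projected:
        sources:
        - configMap:
            name: demo-config      # assumed to exist with a key named "data"
    containers:
    - name: main
      image: busybox
      command: ["sh", "-c", "cat /mnt/data"]
      volumeMounts:
      - name: projected-vol
        mountPath: /mnt/data
        subPath: data              # mount a single key of the volume, not the whole directory
  EOF
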
• [SLOW TEST:28.316 seconds] [sig-storage] Subpath /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":303,"completed":97,"skipped":1743,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:00:48.162: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should run through a ConfigMap lifecycle [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a ConfigMap STEP: fetching the ConfigMap STEP: patching the ConfigMap STEP: listing all ConfigMaps in all namespaces with a label selector STEP: deleting the ConfigMap by collection with a label selector STEP: listing all ConfigMaps in test namespace [AfterEach] [sig-node] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:00:48.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5995" for this suite. 
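
For reference, the ConfigMap lifecycle steps above (create, fetch, patch, list by label selector, delete by collection) correspond to the following kubectl calls; names, keys, and labels are illustrative, not taken from this run:

  kubectl create configmap demo-cm --from-literal=key1=value1
  kubectl get configmap demo-cm -o yaml                        # fetch it
  kubectl patch configmap demo-cm \
    -p '{"metadata":{"labels":{"purpose":"demo"}},"data":{"key1":"patched"}}'
  kubectl get configmaps --all-namespaces -l purpose=demo      # list in all namespaces by label selector
  kubectl delete configmaps -l purpose=demo                    # delete by collection with a label selector
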
•{"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":303,"completed":98,"skipped":1743,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:00:48.787: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 17 12:00:51.063: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 17 12:00:53.494: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733262451, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733262451, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733262451, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733262451, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 17 12:00:55.500: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733262451, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733262451, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733262451, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733262451, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 17 12:00:58.535: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 17 12:00:58.987: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-2712-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:01:00.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2988" for this suite. STEP: Destroying namespace "webhook-2988-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:11.793 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":303,"completed":99,"skipped":1758,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:01:00.582: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: 
Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 17 12:01:05.163: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 17 12:01:07.182: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733262465, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733262465, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733262465, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733262465, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 17 12:01:09.188: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733262465, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733262465, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733262465, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733262465, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 17 12:01:13.044: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:01:13.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7740" for this suite. STEP: Destroying namespace "webhook-7740-markers" for this suite. 
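
For reference, "fail closed" above comes from failurePolicy: Fail on the webhook registration: when the API server cannot reach the webhook backend, matching requests are rejected instead of being let through. A minimal sketch of such a registration, with the service and resource names illustrative:

  kubectl apply -f - <<'EOF'
  apiVersion: admissionregistration.k8s.io/v1
  kind: ValidatingWebhookConfiguration
  metadata:
    name: fail-closed-demo
  webhooks:
  - name: demo.example.com
    failurePolicy: Fail            # reject the request if the webhook cannot be reached
    clientConfig:
      service:
        name: unreachable-service  # deliberately points at a service that does not answer
        namespace: default
        path: /validate
    rules:
    - apiGroups: [""]
      apiVersions: ["v1"]
      operations: ["CREATE"]
      resources: ["configmaps"]
    sideEffects: None
    admissionReviewVersions: ["v1"]
  EOF
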
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:13.699 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":303,"completed":100,"skipped":1779,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:01:14.283: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-7358137c-f269-43fa-9ccc-609432e9d22d STEP: Creating a pod to test consume secrets Aug 17 12:01:16.097: INFO: Waiting up to 5m0s for pod "pod-secrets-4166d966-1332-4b64-bb34-c781418b736c" in namespace "secrets-968" to be "Succeeded or Failed" Aug 17 12:01:16.143: INFO: Pod "pod-secrets-4166d966-1332-4b64-bb34-c781418b736c": Phase="Pending", Reason="", readiness=false. Elapsed: 45.938824ms Aug 17 12:01:18.147: INFO: Pod "pod-secrets-4166d966-1332-4b64-bb34-c781418b736c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050603862s Aug 17 12:01:20.339: INFO: Pod "pod-secrets-4166d966-1332-4b64-bb34-c781418b736c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.242309958s Aug 17 12:01:22.344: INFO: Pod "pod-secrets-4166d966-1332-4b64-bb34-c781418b736c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.247671669s STEP: Saw pod success Aug 17 12:01:22.345: INFO: Pod "pod-secrets-4166d966-1332-4b64-bb34-c781418b736c" satisfied condition "Succeeded or Failed" Aug 17 12:01:22.348: INFO: Trying to get logs from node latest-worker pod pod-secrets-4166d966-1332-4b64-bb34-c781418b736c container secret-volume-test: STEP: delete the pod Aug 17 12:01:22.606: INFO: Waiting for pod pod-secrets-4166d966-1332-4b64-bb34-c781418b736c to disappear Aug 17 12:01:22.698: INFO: Pod pod-secrets-4166d966-1332-4b64-bb34-c781418b736c no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:01:22.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-968" for this suite. • [SLOW TEST:8.429 seconds] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":101,"skipped":1811,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:01:22.714: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 17 12:01:23.207: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Aug 17 12:01:44.786: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1862 create -f -' Aug 17 12:01:51.167: INFO: stderr: "" Aug 17 12:01:51.167: INFO: stdout: "e2e-test-crd-publish-openapi-962-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Aug 17 12:01:51.168: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 
--kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1862 delete e2e-test-crd-publish-openapi-962-crds test-cr' Aug 17 12:01:52.855: INFO: stderr: "" Aug 17 12:01:52.855: INFO: stdout: "e2e-test-crd-publish-openapi-962-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Aug 17 12:01:52.856: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1862 apply -f -' Aug 17 12:01:56.809: INFO: stderr: "" Aug 17 12:01:56.809: INFO: stdout: "e2e-test-crd-publish-openapi-962-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Aug 17 12:01:56.809: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1862 delete e2e-test-crd-publish-openapi-962-crds test-cr' Aug 17 12:01:58.132: INFO: stderr: "" Aug 17 12:01:58.133: INFO: stdout: "e2e-test-crd-publish-openapi-962-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Aug 17 12:01:58.133: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-962-crds' Aug 17 12:02:00.917: INFO: stderr: "" Aug 17 12:02:00.917: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-962-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:02:22.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1862" for this suite. 
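
For reference, the "preserving unknown fields" behaviour above relies on x-kubernetes-preserve-unknown-fields in the CRD's structural schema, which is what lets client-side validation accept arbitrary properties under spec and status. A minimal sketch of a comparable CRD, with group and names illustrative (loosely echoing the Waldo spec/status fields visible in the kubectl explain output):

  kubectl apply -f - <<'EOF'
  apiVersion: apiextensions.k8s.io/v1
  kind: CustomResourceDefinition
  metadata:
    name: waldos.example.com
  spec:
    group: example.com
    scope: Namespaced
    names:
      plural: waldos
      singular: waldo
      kind: Waldo
    versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              x-kubernetes-preserve-unknown-fields: true   # keep arbitrary fields inside spec
            status:
              type: object
              x-kubernetes-preserve-unknown-fields: true
  EOF
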
• [SLOW TEST:60.073 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":303,"completed":102,"skipped":1841,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:02:22.790: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-downwardapi-lgk8 STEP: Creating a pod to test atomic-volume-subpath Aug 17 12:02:23.176: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-lgk8" in namespace "subpath-4916" to be "Succeeded or Failed" Aug 17 12:02:23.276: INFO: Pod "pod-subpath-test-downwardapi-lgk8": Phase="Pending", Reason="", readiness=false. Elapsed: 99.576965ms Aug 17 12:02:25.887: INFO: Pod "pod-subpath-test-downwardapi-lgk8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.711082681s Aug 17 12:02:27.940: INFO: Pod "pod-subpath-test-downwardapi-lgk8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.763662049s Aug 17 12:02:29.947: INFO: Pod "pod-subpath-test-downwardapi-lgk8": Phase="Running", Reason="", readiness=true. Elapsed: 6.77039556s Aug 17 12:02:31.952: INFO: Pod "pod-subpath-test-downwardapi-lgk8": Phase="Running", Reason="", readiness=true. Elapsed: 8.77635446s Aug 17 12:02:33.960: INFO: Pod "pod-subpath-test-downwardapi-lgk8": Phase="Running", Reason="", readiness=true. Elapsed: 10.783939443s Aug 17 12:02:35.967: INFO: Pod "pod-subpath-test-downwardapi-lgk8": Phase="Running", Reason="", readiness=true. Elapsed: 12.7905816s Aug 17 12:02:37.974: INFO: Pod "pod-subpath-test-downwardapi-lgk8": Phase="Running", Reason="", readiness=true. 
Elapsed: 14.797687428s Aug 17 12:02:40.107: INFO: Pod "pod-subpath-test-downwardapi-lgk8": Phase="Running", Reason="", readiness=true. Elapsed: 16.931191691s Aug 17 12:02:42.114: INFO: Pod "pod-subpath-test-downwardapi-lgk8": Phase="Running", Reason="", readiness=true. Elapsed: 18.937693479s Aug 17 12:02:44.174: INFO: Pod "pod-subpath-test-downwardapi-lgk8": Phase="Running", Reason="", readiness=true. Elapsed: 20.99779372s Aug 17 12:02:46.216: INFO: Pod "pod-subpath-test-downwardapi-lgk8": Phase="Running", Reason="", readiness=true. Elapsed: 23.04027458s Aug 17 12:02:48.233: INFO: Pod "pod-subpath-test-downwardapi-lgk8": Phase="Running", Reason="", readiness=true. Elapsed: 25.056941191s Aug 17 12:02:50.379: INFO: Pod "pod-subpath-test-downwardapi-lgk8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 27.202540064s STEP: Saw pod success Aug 17 12:02:50.379: INFO: Pod "pod-subpath-test-downwardapi-lgk8" satisfied condition "Succeeded or Failed" Aug 17 12:02:50.733: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-downwardapi-lgk8 container test-container-subpath-downwardapi-lgk8: STEP: delete the pod Aug 17 12:02:51.493: INFO: Waiting for pod pod-subpath-test-downwardapi-lgk8 to disappear Aug 17 12:02:51.498: INFO: Pod pod-subpath-test-downwardapi-lgk8 no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-lgk8 Aug 17 12:02:51.498: INFO: Deleting pod "pod-subpath-test-downwardapi-lgk8" in namespace "subpath-4916" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:02:51.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4916" for this suite. 
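
For reference, this downward-API variant differs from the projected-volume case earlier in this run only in the volume source; a minimal sketch, with names illustrative:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: subpath-downward-demo
  spec:
    restartPolicy: Never
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: podname
          fieldRef:
            fieldPath: metadata.name   # expose the pod's own name as a file
    containers:
    - name: main
      image: busybox
      command: ["sh", "-c", "cat /mnt/podname"]
      volumeMounts:
      - name: podinfo
        mountPath: /mnt/podname
        subPath: podname
  EOF
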
• [SLOW TEST:28.725 seconds] [sig-storage] Subpath /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":303,"completed":103,"skipped":1861,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:02:51.517: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Update Demo /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:308 [It] should create and stop a replication controller [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a replication controller Aug 17 12:02:51.630: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3362' Aug 17 12:02:53.967: INFO: stderr: "" Aug 17 12:02:53.967: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Aug 17 12:02:53.969: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3362' Aug 17 12:02:55.520: INFO: stderr: "" Aug 17 12:02:55.520: INFO: stdout: "update-demo-nautilus-httqw update-demo-nautilus-s9bhp " Aug 17 12:02:55.521: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-httqw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3362' Aug 17 12:02:57.109: INFO: stderr: "" Aug 17 12:02:57.109: INFO: stdout: "" Aug 17 12:02:57.110: INFO: update-demo-nautilus-httqw is created but not running Aug 17 12:03:02.111: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3362' Aug 17 12:03:03.486: INFO: stderr: "" Aug 17 12:03:03.486: INFO: stdout: "update-demo-nautilus-httqw update-demo-nautilus-s9bhp " Aug 17 12:03:03.486: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-httqw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3362' Aug 17 12:03:04.838: INFO: stderr: "" Aug 17 12:03:04.838: INFO: stdout: "true" Aug 17 12:03:04.838: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-httqw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3362' Aug 17 12:03:06.239: INFO: stderr: "" Aug 17 12:03:06.239: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 17 12:03:06.240: INFO: validating pod update-demo-nautilus-httqw Aug 17 12:03:06.265: INFO: got data: { "image": "nautilus.jpg" } Aug 17 12:03:06.265: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 17 12:03:06.266: INFO: update-demo-nautilus-httqw is verified up and running Aug 17 12:03:06.266: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-s9bhp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3362' Aug 17 12:03:07.801: INFO: stderr: "" Aug 17 12:03:07.801: INFO: stdout: "true" Aug 17 12:03:07.801: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-s9bhp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3362' Aug 17 12:03:09.349: INFO: stderr: "" Aug 17 12:03:09.349: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 17 12:03:09.349: INFO: validating pod update-demo-nautilus-s9bhp Aug 17 12:03:09.366: INFO: got data: { "image": "nautilus.jpg" } Aug 17 12:03:09.366: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 17 12:03:09.366: INFO: update-demo-nautilus-s9bhp is verified up and running STEP: using delete to clean up resources Aug 17 12:03:09.366: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3362' Aug 17 12:03:11.007: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Aug 17 12:03:11.007: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Aug 17 12:03:11.007: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3362' Aug 17 12:03:12.617: INFO: stderr: "No resources found in kubectl-3362 namespace.\n" Aug 17 12:03:12.617: INFO: stdout: "" Aug 17 12:03:12.618: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3362 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Aug 17 12:03:14.080: INFO: stderr: "" Aug 17 12:03:14.080: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:03:14.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3362" for this suite. • [SLOW TEST:22.574 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:306 should create and stop a replication controller [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":303,"completed":104,"skipped":1880,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:03:14.093: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0817 12:03:22.513557 10 metrics_grabber.go:105] Did not receive an external client interface. 
Grabbing metrics from ClusterAutoscaler is disabled. Aug 17 12:04:25.722: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:04:25.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8842" for this suite. • [SLOW TEST:71.648 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete RS created by deployment when not orphaning [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":303,"completed":105,"skipped":1895,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:04:25.744: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should adopt matching pods on creation [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:04:38.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9178" for this suite. 
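
For reference, the adoption verified above can be reproduced by creating a bare pod first and then a ReplicationController whose selector matches it; the controller sets an ownerReference on the pre-existing pod instead of creating a new one. A minimal sketch, with names and image illustrative:

  kubectl run pod-adoption --image=nginx --labels=name=pod-adoption --restart=Never
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: ReplicationController
  metadata:
    name: pod-adoption
  spec:
    replicas: 1
    selector:
      name: pod-adoption
    template:
      metadata:
        labels:
          name: pod-adoption
      spec:
        containers:
        - name: nginx
          image: nginx
  EOF
  kubectl get pod pod-adoption -o jsonpath='{.metadata.ownerReferences[0].kind}'   # -> ReplicationController
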
• [SLOW TEST:12.413 seconds] [sig-apps] ReplicationController /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":303,"completed":106,"skipped":1933,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} S ------------------------------ [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:04:38.158: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-2615 Aug 17 12:04:42.342: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-2615 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Aug 17 12:04:44.358: INFO: stderr: "I0817 12:04:44.240976 1519 log.go:181] (0x4000975130) (0x4000c86640) Create stream\nI0817 12:04:44.247022 1519 log.go:181] (0x4000975130) (0x4000c86640) Stream added, broadcasting: 1\nI0817 12:04:44.263745 1519 log.go:181] (0x4000975130) Reply frame received for 1\nI0817 12:04:44.265030 1519 log.go:181] (0x4000975130) (0x40009a88c0) Create stream\nI0817 12:04:44.265147 1519 log.go:181] (0x4000975130) (0x40009a88c0) Stream added, broadcasting: 3\nI0817 12:04:44.267754 1519 log.go:181] (0x4000975130) Reply frame received for 3\nI0817 12:04:44.268317 1519 log.go:181] (0x4000975130) (0x4000a1e140) Create stream\nI0817 12:04:44.268458 1519 log.go:181] (0x4000975130) (0x4000a1e140) Stream added, broadcasting: 5\nI0817 12:04:44.270108 1519 log.go:181] (0x4000975130) Reply frame received for 5\nI0817 12:04:44.338262 1519 log.go:181] (0x4000975130) Data frame received for 5\nI0817 12:04:44.338763 1519 log.go:181] (0x4000a1e140) (5) Data frame handling\nI0817 12:04:44.339743 1519 log.go:181] (0x4000975130) Data frame received for 3\nI0817 12:04:44.339882 1519 log.go:181] (0x40009a88c0) (3) Data frame handling\nI0817 12:04:44.339958 1519 log.go:181] (0x40009a88c0) (3) Data frame sent\n+ curl -q -s 
--connect-timeout 1 http://localhost:10249/proxyMode\nI0817 12:04:44.340445 1519 log.go:181] (0x4000a1e140) (5) Data frame sent\nI0817 12:04:44.340548 1519 log.go:181] (0x4000975130) Data frame received for 5\nI0817 12:04:44.340632 1519 log.go:181] (0x4000a1e140) (5) Data frame handling\nI0817 12:04:44.340903 1519 log.go:181] (0x4000975130) Data frame received for 3\nI0817 12:04:44.340988 1519 log.go:181] (0x40009a88c0) (3) Data frame handling\nI0817 12:04:44.343068 1519 log.go:181] (0x4000975130) Data frame received for 1\nI0817 12:04:44.343199 1519 log.go:181] (0x4000c86640) (1) Data frame handling\nI0817 12:04:44.343351 1519 log.go:181] (0x4000c86640) (1) Data frame sent\nI0817 12:04:44.344515 1519 log.go:181] (0x4000975130) (0x4000c86640) Stream removed, broadcasting: 1\nI0817 12:04:44.349732 1519 log.go:181] (0x4000975130) (0x4000c86640) Stream removed, broadcasting: 1\nI0817 12:04:44.350057 1519 log.go:181] (0x4000975130) (0x40009a88c0) Stream removed, broadcasting: 3\nI0817 12:04:44.350297 1519 log.go:181] (0x4000975130) (0x4000a1e140) Stream removed, broadcasting: 5\n" Aug 17 12:04:44.359: INFO: stdout: "iptables" Aug 17 12:04:44.359: INFO: proxyMode: iptables Aug 17 12:04:44.367: INFO: Waiting for pod kube-proxy-mode-detector to disappear Aug 17 12:04:44.559: INFO: Pod kube-proxy-mode-detector still exists Aug 17 12:04:46.559: INFO: Waiting for pod kube-proxy-mode-detector to disappear Aug 17 12:04:46.802: INFO: Pod kube-proxy-mode-detector still exists Aug 17 12:04:48.559: INFO: Waiting for pod kube-proxy-mode-detector to disappear Aug 17 12:04:48.565: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-clusterip-timeout in namespace services-2615 STEP: creating replication controller affinity-clusterip-timeout in namespace services-2615 I0817 12:04:48.890290 10 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-2615, replica count: 3 I0817 12:04:51.941669 10 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0817 12:04:54.942544 10 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0817 12:04:57.943270 10 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 17 12:04:58.840: INFO: Creating new exec pod Aug 17 12:05:09.906: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-2615 execpod-affinityhvlqg -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' Aug 17 12:05:11.546: INFO: stderr: "I0817 12:05:11.396926 1539 log.go:181] (0x40002cc0b0) (0x4000660320) Create stream\nI0817 12:05:11.399382 1539 log.go:181] (0x40002cc0b0) (0x4000660320) Stream added, broadcasting: 1\nI0817 12:05:11.412794 1539 log.go:181] (0x40002cc0b0) Reply frame received for 1\nI0817 12:05:11.414429 1539 log.go:181] (0x40002cc0b0) (0x40006a2000) Create stream\nI0817 12:05:11.414554 1539 log.go:181] (0x40002cc0b0) (0x40006a2000) Stream added, broadcasting: 3\nI0817 12:05:11.416464 1539 log.go:181] (0x40002cc0b0) Reply frame received for 3\nI0817 12:05:11.416837 1539 log.go:181] (0x40002cc0b0) (0x4000aba000) Create stream\nI0817 12:05:11.416917 1539 log.go:181] (0x40002cc0b0) (0x4000aba000) Stream added, 
broadcasting: 5\nI0817 12:05:11.418479 1539 log.go:181] (0x40002cc0b0) Reply frame received for 5\nI0817 12:05:11.491392 1539 log.go:181] (0x40002cc0b0) Data frame received for 5\nI0817 12:05:11.491639 1539 log.go:181] (0x4000aba000) (5) Data frame handling\nI0817 12:05:11.492064 1539 log.go:181] (0x4000aba000) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip-timeout 80\nI0817 12:05:11.525999 1539 log.go:181] (0x40002cc0b0) Data frame received for 5\nI0817 12:05:11.526185 1539 log.go:181] (0x4000aba000) (5) Data frame handling\nI0817 12:05:11.526278 1539 log.go:181] (0x4000aba000) (5) Data frame sent\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\nI0817 12:05:11.526364 1539 log.go:181] (0x40002cc0b0) Data frame received for 5\nI0817 12:05:11.526430 1539 log.go:181] (0x4000aba000) (5) Data frame handling\nI0817 12:05:11.526595 1539 log.go:181] (0x40002cc0b0) Data frame received for 3\nI0817 12:05:11.526686 1539 log.go:181] (0x40006a2000) (3) Data frame handling\nI0817 12:05:11.527829 1539 log.go:181] (0x40002cc0b0) Data frame received for 1\nI0817 12:05:11.527940 1539 log.go:181] (0x4000660320) (1) Data frame handling\nI0817 12:05:11.528045 1539 log.go:181] (0x4000660320) (1) Data frame sent\nI0817 12:05:11.529669 1539 log.go:181] (0x40002cc0b0) (0x4000660320) Stream removed, broadcasting: 1\nI0817 12:05:11.531440 1539 log.go:181] (0x40002cc0b0) Go away received\nI0817 12:05:11.535059 1539 log.go:181] (0x40002cc0b0) (0x4000660320) Stream removed, broadcasting: 1\nI0817 12:05:11.535419 1539 log.go:181] (0x40002cc0b0) (0x40006a2000) Stream removed, broadcasting: 3\nI0817 12:05:11.535679 1539 log.go:181] (0x40002cc0b0) (0x4000aba000) Stream removed, broadcasting: 5\n" Aug 17 12:05:11.547: INFO: stdout: "" Aug 17 12:05:11.552: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-2615 execpod-affinityhvlqg -- /bin/sh -x -c nc -zv -t -w 2 10.107.122.187 80' Aug 17 12:05:13.121: INFO: stderr: "I0817 12:05:13.030942 1559 log.go:181] (0x40002eef20) (0x4000646460) Create stream\nI0817 12:05:13.033934 1559 log.go:181] (0x40002eef20) (0x4000646460) Stream added, broadcasting: 1\nI0817 12:05:13.043026 1559 log.go:181] (0x40002eef20) Reply frame received for 1\nI0817 12:05:13.043549 1559 log.go:181] (0x40002eef20) (0x4000622000) Create stream\nI0817 12:05:13.043606 1559 log.go:181] (0x40002eef20) (0x4000622000) Stream added, broadcasting: 3\nI0817 12:05:13.044830 1559 log.go:181] (0x40002eef20) Reply frame received for 3\nI0817 12:05:13.045041 1559 log.go:181] (0x40002eef20) (0x4000646500) Create stream\nI0817 12:05:13.045089 1559 log.go:181] (0x40002eef20) (0x4000646500) Stream added, broadcasting: 5\nI0817 12:05:13.046076 1559 log.go:181] (0x40002eef20) Reply frame received for 5\nI0817 12:05:13.102196 1559 log.go:181] (0x40002eef20) Data frame received for 5\nI0817 12:05:13.102396 1559 log.go:181] (0x40002eef20) Data frame received for 1\nI0817 12:05:13.102645 1559 log.go:181] (0x40002eef20) Data frame received for 3\nI0817 12:05:13.102824 1559 log.go:181] (0x4000622000) (3) Data frame handling\nI0817 12:05:13.102948 1559 log.go:181] (0x4000646500) (5) Data frame handling\nI0817 12:05:13.103171 1559 log.go:181] (0x4000646460) (1) Data frame handling\nI0817 12:05:13.104422 1559 log.go:181] (0x4000646500) (5) Data frame sent\nI0817 12:05:13.104518 1559 log.go:181] (0x4000646460) (1) Data frame sent\nI0817 12:05:13.106204 1559 log.go:181] (0x40002eef20) Data frame received for 5\n+ nc -zv -t -w 
2 10.107.122.187 80\nConnection to 10.107.122.187 80 port [tcp/http] succeeded!\nI0817 12:05:13.106513 1559 log.go:181] (0x40002eef20) (0x4000646460) Stream removed, broadcasting: 1\nI0817 12:05:13.107282 1559 log.go:181] (0x4000646500) (5) Data frame handling\nI0817 12:05:13.107520 1559 log.go:181] (0x40002eef20) Go away received\nI0817 12:05:13.110842 1559 log.go:181] (0x40002eef20) (0x4000646460) Stream removed, broadcasting: 1\nI0817 12:05:13.111125 1559 log.go:181] (0x40002eef20) (0x4000622000) Stream removed, broadcasting: 3\nI0817 12:05:13.111339 1559 log.go:181] (0x40002eef20) (0x4000646500) Stream removed, broadcasting: 5\n" Aug 17 12:05:13.122: INFO: stdout: "" Aug 17 12:05:13.122: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-2615 execpod-affinityhvlqg -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.107.122.187:80/ ; done' Aug 17 12:05:14.775: INFO: stderr: "I0817 12:05:14.579885 1579 log.go:181] (0x4000633080) (0x4000b82460) Create stream\nI0817 12:05:14.585130 1579 log.go:181] (0x4000633080) (0x4000b82460) Stream added, broadcasting: 1\nI0817 12:05:14.600155 1579 log.go:181] (0x4000633080) Reply frame received for 1\nI0817 12:05:14.601564 1579 log.go:181] (0x4000633080) (0x4000c88000) Create stream\nI0817 12:05:14.601701 1579 log.go:181] (0x4000633080) (0x4000c88000) Stream added, broadcasting: 3\nI0817 12:05:14.603769 1579 log.go:181] (0x4000633080) Reply frame received for 3\nI0817 12:05:14.604209 1579 log.go:181] (0x4000633080) (0x4000026320) Create stream\nI0817 12:05:14.604305 1579 log.go:181] (0x4000633080) (0x4000026320) Stream added, broadcasting: 5\nI0817 12:05:14.605772 1579 log.go:181] (0x4000633080) Reply frame received for 5\nI0817 12:05:14.671559 1579 log.go:181] (0x4000633080) Data frame received for 5\nI0817 12:05:14.671896 1579 log.go:181] (0x4000633080) Data frame received for 3\nI0817 12:05:14.671994 1579 log.go:181] (0x4000c88000) (3) Data frame handling\nI0817 12:05:14.672107 1579 log.go:181] (0x4000026320) (5) Data frame handling\nI0817 12:05:14.672610 1579 log.go:181] (0x4000c88000) (3) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.122.187:80/\nI0817 12:05:14.673186 1579 log.go:181] (0x4000026320) (5) Data frame sent\nI0817 12:05:14.673980 1579 log.go:181] (0x4000633080) Data frame received for 3\nI0817 12:05:14.674072 1579 log.go:181] (0x4000c88000) (3) Data frame handling\nI0817 12:05:14.674174 1579 log.go:181] (0x4000c88000) (3) Data frame sent\nI0817 12:05:14.674596 1579 log.go:181] (0x4000633080) Data frame received for 5\nI0817 12:05:14.674684 1579 log.go:181] (0x4000026320) (5) Data frame handling\nI0817 12:05:14.674754 1579 log.go:181] (0x4000026320) (5) Data frame sent\nI0817 12:05:14.674820 1579 log.go:181] (0x4000633080) Data frame received for 3\nI0817 12:05:14.675035 1579 log.go:181] (0x4000c88000) (3) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.122.187:80/\nI0817 12:05:14.675114 1579 log.go:181] (0x4000c88000) (3) Data frame sent\nI0817 12:05:14.679583 1579 log.go:181] (0x4000633080) Data frame received for 3\nI0817 12:05:14.679681 1579 log.go:181] (0x4000c88000) (3) Data frame handling\nI0817 12:05:14.679796 1579 log.go:181] (0x4000c88000) (3) Data frame sent\nI0817 12:05:14.680080 1579 log.go:181] (0x4000633080) Data frame received for 3\nI0817 12:05:14.680184 1579 log.go:181] (0x4000c88000) (3) Data frame handling\nI0817 12:05:14.680285 1579 
log.go:181] (0x4000633080) Data frame received for 5\nI0817 12:05:14.680392 1579 log.go:181] (0x4000026320) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.122.187:80/\nI0817 12:05:14.680494 1579 log.go:181] (0x4000c88000) (3) Data frame sent\nI0817 12:05:14.680593 1579 log.go:181] (0x4000026320) (5) Data frame sent\nI0817 12:05:14.685343 1579 log.go:181] (0x4000633080) Data frame received for 3\nI0817 12:05:14.685420 1579 log.go:181] (0x4000c88000) (3) Data frame handling\nI0817 12:05:14.685505 1579 log.go:181] (0x4000c88000) (3) Data frame sent\nI0817 12:05:14.686125 1579 log.go:181] (0x4000633080) Data frame received for 5\nI0817 12:05:14.686202 1579 log.go:181] (0x4000633080) Data frame received for 3\nI0817 12:05:14.686298 1579 log.go:181] (0x4000c88000) (3) Data frame handling\nI0817 12:05:14.686364 1579 log.go:181] (0x4000026320) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.122.187:80/\nI0817 12:05:14.686481 1579 log.go:181] (0x4000c88000) (3) Data frame sent\nI0817 12:05:14.686605 1579 log.go:181] (0x4000026320) (5) Data frame sent\nI0817 12:05:14.690425 1579 log.go:181] (0x4000633080) Data frame received for 3\nI0817 12:05:14.690544 1579 log.go:181] (0x4000c88000) (3) Data frame handling\nI0817 12:05:14.690696 1579 log.go:181] (0x4000c88000) (3) Data frame sent\nI0817 12:05:14.691339 1579 log.go:181] (0x4000633080) Data frame received for 5\nI0817 12:05:14.691466 1579 log.go:181] (0x4000026320) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.122.187:80/\nI0817 12:05:14.691576 1579 log.go:181] (0x4000633080) Data frame received for 3\nI0817 12:05:14.691702 1579 log.go:181] (0x4000c88000) (3) Data frame handling\nI0817 12:05:14.691809 1579 log.go:181] (0x4000026320) (5) Data frame sent\nI0817 12:05:14.691928 1579 log.go:181] (0x4000c88000) (3) Data frame sent\nI0817 12:05:14.695658 1579 log.go:181] (0x4000633080) Data frame received for 3\nI0817 12:05:14.695817 1579 log.go:181] (0x4000c88000) (3) Data frame handling\nI0817 12:05:14.695970 1579 log.go:181] (0x4000c88000) (3) Data frame sent\nI0817 12:05:14.696660 1579 log.go:181] (0x4000633080) Data frame received for 5\nI0817 12:05:14.696820 1579 log.go:181] (0x4000026320) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.122.187:80/\nI0817 12:05:14.696929 1579 log.go:181] (0x4000633080) Data frame received for 3\nI0817 12:05:14.697083 1579 log.go:181] (0x4000c88000) (3) Data frame handling\nI0817 12:05:14.697223 1579 log.go:181] (0x4000026320) (5) Data frame sent\nI0817 12:05:14.697360 1579 log.go:181] (0x4000c88000) (3) Data frame sent\nI0817 12:05:14.701559 1579 log.go:181] (0x4000633080) Data frame received for 3\nI0817 12:05:14.701703 1579 log.go:181] (0x4000c88000) (3) Data frame handling\nI0817 12:05:14.701891 1579 log.go:181] (0x4000c88000) (3) Data frame sent\nI0817 12:05:14.702330 1579 log.go:181] (0x4000633080) Data frame received for 3\nI0817 12:05:14.702469 1579 log.go:181] (0x4000c88000) (3) Data frame handling\nI0817 12:05:14.702575 1579 log.go:181] (0x4000633080) Data frame received for 5\nI0817 12:05:14.702717 1579 log.go:181] (0x4000026320) (5) Data frame handling\nI0817 12:05:14.702875 1579 log.go:181] (0x4000026320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.122.187:80/\nI0817 12:05:14.703004 1579 log.go:181] (0x4000c88000) (3) Data frame sent\nI0817 12:05:14.707699 1579 log.go:181] (0x4000633080) Data frame received for 3\nI0817 12:05:14.707850 1579 
log.go:181] (0x4000c88000) (3) Data frame handling\nI0817 12:05:14.708020 1579 log.go:181] (0x4000c88000) (3) Data frame sent\nI0817 12:05:14.708942 1579 log.go:181] (0x4000633080) Data frame received for 5\nI0817 12:05:14.709052 1579 log.go:181] (0x4000026320) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.122.187:80/I0817 12:05:14.709159 1579 log.go:181] (0x4000633080) Data frame received for 3\nI0817 12:05:14.709271 1579 log.go:181] (0x4000c88000) (3) Data frame handling\nI0817 12:05:14.709349 1579 log.go:181] (0x4000c88000) (3) Data frame sent\nI0817 12:05:14.709420 1579 log.go:181] (0x4000026320) (5) Data frame sent\nI0817 12:05:14.709492 1579 log.go:181] (0x4000633080) Data frame received for 5\nI0817 12:05:14.709557 1579 log.go:181] (0x4000026320) (5) Data frame handling\nI0817 12:05:14.709675 1579 log.go:181] (0x4000026320) (5) Data frame sent\n\nI0817 12:05:14.713972 1579 log.go:181] (0x4000633080) Data frame received for 3\nI0817 12:05:14.714097 1579 log.go:181] (0x4000c88000) (3) Data frame handling\nI0817 12:05:14.714180 1579 log.go:181] (0x4000c88000) (3) Data frame sent\nI0817 12:05:14.714255 1579 log.go:181] (0x4000633080) Data frame received for 5\nI0817 12:05:14.714349 1579 log.go:181] (0x4000026320) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.122.187:80/\nI0817 12:05:14.714443 1579 log.go:181] (0x4000633080) Data frame received for 3\nI0817 12:05:14.714584 1579 log.go:181] (0x4000c88000) (3) Data frame handling\nI0817 12:05:14.714662 1579 log.go:181] (0x4000026320) (5) Data frame sent\nI0817 12:05:14.714761 1579 log.go:181] (0x4000c88000) (3) Data frame sent\nI0817 12:05:14.717790 1579 log.go:181] (0x4000633080) Data frame received for 3\nI0817 12:05:14.717879 1579 log.go:181] (0x4000c88000) (3) Data frame handling\nI0817 12:05:14.718043 1579 log.go:181] (0x4000c88000) (3) Data frame sent\nI0817 12:05:14.718369 1579 log.go:181] (0x4000633080) Data frame received for 5\nI0817 12:05:14.718512 1579 log.go:181] (0x4000026320) (5) Data frame handling\nI0817 12:05:14.718634 1579 log.go:181] (0x4000026320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2I0817 12:05:14.718737 1579 log.go:181] (0x4000633080) Data frame received for 5\nI0817 12:05:14.718851 1579 log.go:181] (0x4000633080) Data frame received for 3\nI0817 12:05:14.719002 1579 log.go:181] (0x4000c88000) (3) Data frame handling\nI0817 12:05:14.719083 1579 log.go:181] (0x4000026320) (5) Data frame handling\n http://10.107.122.187:80/\nI0817 12:05:14.719194 1579 log.go:181] (0x4000c88000) (3) Data frame sent\nI0817 12:05:14.719305 1579 log.go:181] (0x4000026320) (5) Data frame sent\nI0817 12:05:14.722222 1579 log.go:181] (0x4000633080) Data frame received for 3\nI0817 12:05:14.722351 1579 log.go:181] (0x4000c88000) (3) Data frame handling\nI0817 12:05:14.722487 1579 log.go:181] (0x4000c88000) (3) Data frame sent\nI0817 12:05:14.722741 1579 log.go:181] (0x4000633080) Data frame received for 3\nI0817 12:05:14.722888 1579 log.go:181] (0x4000c88000) (3) Data frame handling\nI0817 12:05:14.723005 1579 log.go:181] (0x4000633080) Data frame received for 5\nI0817 12:05:14.723148 1579 log.go:181] (0x4000026320) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.122.187:80/\nI0817 12:05:14.723239 1579 log.go:181] (0x4000c88000) (3) Data frame sent\nI0817 12:05:14.723339 1579 log.go:181] (0x4000026320) (5) Data frame sent\nI0817 12:05:14.727147 1579 log.go:181] (0x4000633080) Data frame received for 3\nI0817 
12:05:14.727254 1579 log.go:181] (0x4000c88000) (3) Data frame handling\nI0817 12:05:14.727399 1579 log.go:181] (0x4000c88000) (3) Data frame sent\nI0817 12:05:14.727592 1579 log.go:181] (0x4000633080) Data frame received for 5\nI0817 12:05:14.727724 1579 log.go:181] (0x4000026320) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.122.187:80/\nI0817 12:05:14.727832 1579 log.go:181] (0x4000633080) Data frame received for 3\nI0817 12:05:14.727922 1579 log.go:181] (0x4000c88000) (3) Data frame handling\nI0817 12:05:14.727998 1579 log.go:181] (0x4000c88000) (3) Data frame sent\nI0817 12:05:14.728063 1579 log.go:181] (0x4000026320) (5) Data frame sent\nI0817 12:05:14.732602 1579 log.go:181] (0x4000633080) Data frame received for 5\nI0817 12:05:14.732715 1579 log.go:181] (0x4000026320) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.122.187:80/\nI0817 12:05:14.732943 1579 log.go:181] (0x4000633080) Data frame received for 3\nI0817 12:05:14.733103 1579 log.go:181] (0x4000c88000) (3) Data frame handling\nI0817 12:05:14.733248 1579 log.go:181] (0x4000026320) (5) Data frame sent\nI0817 12:05:14.733427 1579 log.go:181] (0x4000c88000) (3) Data frame sent\nI0817 12:05:14.733556 1579 log.go:181] (0x4000633080) Data frame received for 3\nI0817 12:05:14.733662 1579 log.go:181] (0x4000c88000) (3) Data frame handling\nI0817 12:05:14.733768 1579 log.go:181] (0x4000c88000) (3) Data frame sent\nI0817 12:05:14.738160 1579 log.go:181] (0x4000633080) Data frame received for 3\nI0817 12:05:14.738228 1579 log.go:181] (0x4000c88000) (3) Data frame handling\nI0817 12:05:14.738304 1579 log.go:181] (0x4000c88000) (3) Data frame sent\nI0817 12:05:14.738879 1579 log.go:181] (0x4000633080) Data frame received for 3\nI0817 12:05:14.738986 1579 log.go:181] (0x4000c88000) (3) Data frame handling\nI0817 12:05:14.739087 1579 log.go:181] (0x4000633080) Data frame received for 5\nI0817 12:05:14.739198 1579 log.go:181] (0x4000026320) (5) Data frame handling\nI0817 12:05:14.739309 1579 log.go:181] (0x4000c88000) (3) Data frame sent\nI0817 12:05:14.739420 1579 log.go:181] (0x4000026320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.122.187:80/\nI0817 12:05:14.744029 1579 log.go:181] (0x4000633080) Data frame received for 3\nI0817 12:05:14.744099 1579 log.go:181] (0x4000c88000) (3) Data frame handling\nI0817 12:05:14.744198 1579 log.go:181] (0x4000c88000) (3) Data frame sent\nI0817 12:05:14.744502 1579 log.go:181] (0x4000633080) Data frame received for 3\nI0817 12:05:14.744584 1579 log.go:181] (0x4000c88000) (3) Data frame handling\nI0817 12:05:14.744647 1579 log.go:181] (0x4000c88000) (3) Data frame sent\nI0817 12:05:14.744709 1579 log.go:181] (0x4000633080) Data frame received for 5\nI0817 12:05:14.744902 1579 log.go:181] (0x4000026320) (5) Data frame handling\nI0817 12:05:14.744974 1579 log.go:181] (0x4000026320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.122.187:80/\nI0817 12:05:14.750717 1579 log.go:181] (0x4000633080) Data frame received for 3\nI0817 12:05:14.750792 1579 log.go:181] (0x4000c88000) (3) Data frame handling\nI0817 12:05:14.750875 1579 log.go:181] (0x4000c88000) (3) Data frame sent\nI0817 12:05:14.751617 1579 log.go:181] (0x4000633080) Data frame received for 5\nI0817 12:05:14.751719 1579 log.go:181] (0x4000026320) (5) Data frame handling\nI0817 12:05:14.751804 1579 log.go:181] (0x4000026320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.122.187:80/\nI0817 
12:05:14.751892 1579 log.go:181] (0x4000633080) Data frame received for 3\nI0817 12:05:14.751959 1579 log.go:181] (0x4000c88000) (3) Data frame handling\nI0817 12:05:14.752039 1579 log.go:181] (0x4000c88000) (3) Data frame sent\nI0817 12:05:14.757979 1579 log.go:181] (0x4000633080) Data frame received for 3\nI0817 12:05:14.758053 1579 log.go:181] (0x4000c88000) (3) Data frame handling\nI0817 12:05:14.758208 1579 log.go:181] (0x4000633080) Data frame received for 5\nI0817 12:05:14.758376 1579 log.go:181] (0x4000026320) (5) Data frame handling\nI0817 12:05:14.758457 1579 log.go:181] (0x4000c88000) (3) Data frame sent\nI0817 12:05:14.758549 1579 log.go:181] (0x4000633080) Data frame received for 3\nI0817 12:05:14.758602 1579 log.go:181] (0x4000c88000) (3) Data frame handling\nI0817 12:05:14.760367 1579 log.go:181] (0x4000633080) Data frame received for 1\nI0817 12:05:14.760435 1579 log.go:181] (0x4000b82460) (1) Data frame handling\nI0817 12:05:14.760506 1579 log.go:181] (0x4000b82460) (1) Data frame sent\nI0817 12:05:14.761436 1579 log.go:181] (0x4000633080) (0x4000b82460) Stream removed, broadcasting: 1\nI0817 12:05:14.764449 1579 log.go:181] (0x4000633080) Go away received\nI0817 12:05:14.766975 1579 log.go:181] (0x4000633080) (0x4000b82460) Stream removed, broadcasting: 1\nI0817 12:05:14.767193 1579 log.go:181] (0x4000633080) (0x4000c88000) Stream removed, broadcasting: 3\nI0817 12:05:14.767344 1579 log.go:181] (0x4000633080) (0x4000026320) Stream removed, broadcasting: 5\n" Aug 17 12:05:14.780: INFO: stdout: "\naffinity-clusterip-timeout-6mvz5\naffinity-clusterip-timeout-6mvz5\naffinity-clusterip-timeout-6mvz5\naffinity-clusterip-timeout-6mvz5\naffinity-clusterip-timeout-6mvz5\naffinity-clusterip-timeout-6mvz5\naffinity-clusterip-timeout-6mvz5\naffinity-clusterip-timeout-6mvz5\naffinity-clusterip-timeout-6mvz5\naffinity-clusterip-timeout-6mvz5\naffinity-clusterip-timeout-6mvz5\naffinity-clusterip-timeout-6mvz5\naffinity-clusterip-timeout-6mvz5\naffinity-clusterip-timeout-6mvz5\naffinity-clusterip-timeout-6mvz5\naffinity-clusterip-timeout-6mvz5" Aug 17 12:05:14.780: INFO: Received response from host: affinity-clusterip-timeout-6mvz5 Aug 17 12:05:14.781: INFO: Received response from host: affinity-clusterip-timeout-6mvz5 Aug 17 12:05:14.781: INFO: Received response from host: affinity-clusterip-timeout-6mvz5 Aug 17 12:05:14.781: INFO: Received response from host: affinity-clusterip-timeout-6mvz5 Aug 17 12:05:14.781: INFO: Received response from host: affinity-clusterip-timeout-6mvz5 Aug 17 12:05:14.781: INFO: Received response from host: affinity-clusterip-timeout-6mvz5 Aug 17 12:05:14.781: INFO: Received response from host: affinity-clusterip-timeout-6mvz5 Aug 17 12:05:14.781: INFO: Received response from host: affinity-clusterip-timeout-6mvz5 Aug 17 12:05:14.781: INFO: Received response from host: affinity-clusterip-timeout-6mvz5 Aug 17 12:05:14.781: INFO: Received response from host: affinity-clusterip-timeout-6mvz5 Aug 17 12:05:14.781: INFO: Received response from host: affinity-clusterip-timeout-6mvz5 Aug 17 12:05:14.781: INFO: Received response from host: affinity-clusterip-timeout-6mvz5 Aug 17 12:05:14.781: INFO: Received response from host: affinity-clusterip-timeout-6mvz5 Aug 17 12:05:14.781: INFO: Received response from host: affinity-clusterip-timeout-6mvz5 Aug 17 12:05:14.781: INFO: Received response from host: affinity-clusterip-timeout-6mvz5 Aug 17 12:05:14.781: INFO: Received response from host: affinity-clusterip-timeout-6mvz5 Aug 17 12:05:14.781: INFO: Running 
'/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-2615 execpod-affinityhvlqg -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.107.122.187:80/' Aug 17 12:05:16.365: INFO: stderr: "I0817 12:05:16.242277 1599 log.go:181] (0x40003082c0) (0x4000f1e1e0) Create stream\nI0817 12:05:16.245559 1599 log.go:181] (0x40003082c0) (0x4000f1e1e0) Stream added, broadcasting: 1\nI0817 12:05:16.258493 1599 log.go:181] (0x40003082c0) Reply frame received for 1\nI0817 12:05:16.259171 1599 log.go:181] (0x40003082c0) (0x4000f1e280) Create stream\nI0817 12:05:16.259235 1599 log.go:181] (0x40003082c0) (0x4000f1e280) Stream added, broadcasting: 3\nI0817 12:05:16.260528 1599 log.go:181] (0x40003082c0) Reply frame received for 3\nI0817 12:05:16.260967 1599 log.go:181] (0x40003082c0) (0x4000658140) Create stream\nI0817 12:05:16.261072 1599 log.go:181] (0x40003082c0) (0x4000658140) Stream added, broadcasting: 5\nI0817 12:05:16.262427 1599 log.go:181] (0x40003082c0) Reply frame received for 5\nI0817 12:05:16.328844 1599 log.go:181] (0x40003082c0) Data frame received for 5\nI0817 12:05:16.329250 1599 log.go:181] (0x4000658140) (5) Data frame handling\nI0817 12:05:16.330944 1599 log.go:181] (0x40003082c0) Data frame received for 3\nI0817 12:05:16.331092 1599 log.go:181] (0x4000f1e280) (3) Data frame handling\n+ curl -q -s --connect-timeout 2 http://10.107.122.187:80/\nI0817 12:05:16.331632 1599 log.go:181] (0x4000f1e280) (3) Data frame sent\nI0817 12:05:16.332032 1599 log.go:181] (0x4000658140) (5) Data frame sent\nI0817 12:05:16.332171 1599 log.go:181] (0x40003082c0) Data frame received for 5\nI0817 12:05:16.332257 1599 log.go:181] (0x4000658140) (5) Data frame handling\nI0817 12:05:16.332380 1599 log.go:181] (0x40003082c0) Data frame received for 3\nI0817 12:05:16.332470 1599 log.go:181] (0x4000f1e280) (3) Data frame handling\nI0817 12:05:16.332822 1599 log.go:181] (0x40003082c0) Data frame received for 1\nI0817 12:05:16.332963 1599 log.go:181] (0x4000f1e1e0) (1) Data frame handling\nI0817 12:05:16.333072 1599 log.go:181] (0x4000f1e1e0) (1) Data frame sent\nI0817 12:05:16.334861 1599 log.go:181] (0x40003082c0) (0x4000f1e1e0) Stream removed, broadcasting: 1\nI0817 12:05:16.336901 1599 log.go:181] (0x40003082c0) Go away received\nI0817 12:05:16.356362 1599 log.go:181] (0x40003082c0) (0x4000f1e1e0) Stream removed, broadcasting: 1\nI0817 12:05:16.356700 1599 log.go:181] (0x40003082c0) (0x4000f1e280) Stream removed, broadcasting: 3\nI0817 12:05:16.357068 1599 log.go:181] (0x40003082c0) (0x4000658140) Stream removed, broadcasting: 5\n" Aug 17 12:05:16.366: INFO: stdout: "affinity-clusterip-timeout-6mvz5" Aug 17 12:05:31.367: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-2615 execpod-affinityhvlqg -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.107.122.187:80/' Aug 17 12:05:32.991: INFO: stderr: "I0817 12:05:32.868449 1619 log.go:181] (0x400062a160) (0x4000c35c20) Create stream\nI0817 12:05:32.873587 1619 log.go:181] (0x400062a160) (0x4000c35c20) Stream added, broadcasting: 1\nI0817 12:05:32.885919 1619 log.go:181] (0x400062a160) Reply frame received for 1\nI0817 12:05:32.886504 1619 log.go:181] (0x400062a160) (0x40004380a0) Create stream\nI0817 12:05:32.886568 1619 log.go:181] (0x400062a160) (0x40004380a0) Stream added, broadcasting: 3\nI0817 12:05:32.887959 1619 log.go:181] (0x400062a160) Reply frame received for 3\nI0817 12:05:32.888597 1619 log.go:181] 
(0x400062a160) (0x4000c35cc0) Create stream\nI0817 12:05:32.888675 1619 log.go:181] (0x400062a160) (0x4000c35cc0) Stream added, broadcasting: 5\nI0817 12:05:32.890187 1619 log.go:181] (0x400062a160) Reply frame received for 5\nI0817 12:05:32.965777 1619 log.go:181] (0x400062a160) Data frame received for 5\nI0817 12:05:32.965969 1619 log.go:181] (0x4000c35cc0) (5) Data frame handling\nI0817 12:05:32.966317 1619 log.go:181] (0x4000c35cc0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.107.122.187:80/\nI0817 12:05:32.970598 1619 log.go:181] (0x400062a160) Data frame received for 3\nI0817 12:05:32.970851 1619 log.go:181] (0x40004380a0) (3) Data frame handling\nI0817 12:05:32.971055 1619 log.go:181] (0x40004380a0) (3) Data frame sent\nI0817 12:05:32.971219 1619 log.go:181] (0x400062a160) Data frame received for 5\nI0817 12:05:32.971461 1619 log.go:181] (0x4000c35cc0) (5) Data frame handling\nI0817 12:05:32.971756 1619 log.go:181] (0x400062a160) Data frame received for 3\nI0817 12:05:32.971943 1619 log.go:181] (0x40004380a0) (3) Data frame handling\nI0817 12:05:32.973125 1619 log.go:181] (0x400062a160) Data frame received for 1\nI0817 12:05:32.973237 1619 log.go:181] (0x4000c35c20) (1) Data frame handling\nI0817 12:05:32.973337 1619 log.go:181] (0x4000c35c20) (1) Data frame sent\nI0817 12:05:32.974470 1619 log.go:181] (0x400062a160) (0x4000c35c20) Stream removed, broadcasting: 1\nI0817 12:05:32.978131 1619 log.go:181] (0x400062a160) Go away received\nI0817 12:05:32.982245 1619 log.go:181] (0x400062a160) (0x4000c35c20) Stream removed, broadcasting: 1\nI0817 12:05:32.982548 1619 log.go:181] (0x400062a160) (0x40004380a0) Stream removed, broadcasting: 3\nI0817 12:05:32.982749 1619 log.go:181] (0x400062a160) (0x4000c35cc0) Stream removed, broadcasting: 5\n" Aug 17 12:05:32.992: INFO: stdout: "affinity-clusterip-timeout-b7s5n" Aug 17 12:05:32.992: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-2615, will wait for the garbage collector to delete the pods Aug 17 12:05:33.151: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 8.803255ms Aug 17 12:05:33.652: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 500.934856ms [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:05:50.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2615" for this suite. 
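------------------------------
The behaviour logged above — sixteen identical responses from affinity-clusterip-timeout-6mvz5, then a switch to affinity-clusterip-timeout-b7s5n after the 15 s pause — is what sessionAffinity: ClientIP with a short timeoutSeconds produces: kube-proxy pins a client IP to one backend and forgets the pinning once the client stays idle past the timeout. A sketch of such a Service in client-go types; the timeout value and target port are assumptions, not read from the test.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed kubeconfig path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	timeout := int32(10) // assumed short timeout for illustration
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "affinity-clusterip-timeout"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"name": "affinity-clusterip-timeout"},
			Ports: []corev1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(9376), // assumed backend port
			}},
			// Pin each client IP to one endpoint until it is idle for
			// timeout seconds; afterwards a new endpoint may be chosen.
			SessionAffinity: corev1.ServiceAffinityClientIP,
			SessionAffinityConfig: &corev1.SessionAffinityConfig{
				ClientIP: &corev1.ClientIPConfig{TimeoutSeconds: &timeout},
			},
		},
	}
	if _, err := cs.CoreV1().Services("default").Create(context.Background(), svc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------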
[AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:72.060 seconds] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":303,"completed":107,"skipped":1934,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:05:50.219: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: set up a multi version CRD Aug 17 12:05:50.306: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:07:59.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6299" for this suite. 
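------------------------------
The rename steps above work because the apiserver publishes OpenAPI only for CRD versions marked served: true; changing a version's name and updating the CRD swaps which paths and schemas appear in the published spec, while other versions stay untouched. A hypothetical two-version CRD in apiextensions v1 Go types — group, kind, and version names here are illustrative, not the test's own.

package main

import (
	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// twoVersionCRD returns a CRD with one storage version and one extra served
// version. Renaming the second entry (e.g. "v4" -> "v5") and updating the CRD
// makes the apiserver serve /apis/<group>/v5 and drop v4 from the OpenAPI spec.
func twoVersionCRD() *apiextv1.CustomResourceDefinition {
	schema := &apiextv1.CustomResourceValidation{
		OpenAPIV3Schema: &apiextv1.JSONSchemaProps{Type: "object"},
	}
	return &apiextv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "foos.crd-publish-openapi-test.example.com"},
		Spec: apiextv1.CustomResourceDefinitionSpec{
			Group: "crd-publish-openapi-test.example.com",
			Scope: apiextv1.NamespaceScoped,
			Names: apiextv1.CustomResourceDefinitionNames{
				Plural: "foos", Singular: "foo", Kind: "Foo", ListKind: "FooList",
			},
			Versions: []apiextv1.CustomResourceDefinitionVersion{
				{Name: "v3", Served: true, Storage: true, Schema: schema},
				{Name: "v4", Served: true, Storage: false, Schema: schema}, // rename this entry to "v5"
			},
		},
	}
}

func main() { _ = twoVersionCRD() }
------------------------------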
• [SLOW TEST:128.968 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":303,"completed":108,"skipped":1934,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:07:59.189: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 17 12:08:00.007: INFO: Waiting up to 5m0s for pod "busybox-user-65534-22271c1d-51a5-4adc-b320-ea30e4acb1d0" in namespace "security-context-test-7925" to be "Succeeded or Failed" Aug 17 12:08:00.100: INFO: Pod "busybox-user-65534-22271c1d-51a5-4adc-b320-ea30e4acb1d0": Phase="Pending", Reason="", readiness=false. Elapsed: 92.84193ms Aug 17 12:08:02.845: INFO: Pod "busybox-user-65534-22271c1d-51a5-4adc-b320-ea30e4acb1d0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.838023044s Aug 17 12:08:05.119: INFO: Pod "busybox-user-65534-22271c1d-51a5-4adc-b320-ea30e4acb1d0": Phase="Pending", Reason="", readiness=false. Elapsed: 5.11206131s Aug 17 12:08:07.130: INFO: Pod "busybox-user-65534-22271c1d-51a5-4adc-b320-ea30e4acb1d0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.12231008s Aug 17 12:08:07.130: INFO: Pod "busybox-user-65534-22271c1d-51a5-4adc-b320-ea30e4acb1d0" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:08:07.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-7925" for this suite. 
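------------------------------
The uid check above hinges on securityContext.runAsUser: the kubelet starts the container's process as uid 65534, and the test asserts the pod reaches Succeeded. A minimal sketch of an equivalent pod — the image and command are assumptions inferred from the pod name, not copied from the test source.

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// busyboxAsUser returns a pod whose container runs as uid 65534; printing the
// uid and exiting 0 lets the pod reach phase Succeeded.
func busyboxAsUser() *corev1.Pod {
	uid := int64(65534)
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-user-65534"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox:1.36", // assumed image
				Command: []string{"sh", "-c", "id -u"},
				SecurityContext: &corev1.SecurityContext{RunAsUser: &uid},
			}},
		},
	}
}

func main() { _ = busyboxAsUser() }
------------------------------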
• [SLOW TEST:7.955 seconds] [k8s.io] Security Context /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 When creating a container with runAsUser /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:45 should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":109,"skipped":1966,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:08:07.146: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 17 12:08:07.250: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Aug 17 12:08:07.952: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-17T12:08:07Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-08-17T12:08:07Z]] name:name1 resourceVersion:717533 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:13e850f3-bcc5-42f6-91f2-8d723a7a6a09] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Aug 17 12:08:18.145: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-17T12:08:17Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-08-17T12:08:17Z]] name:name2 resourceVersion:717572 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:fcd5fe59-f616-4855-a280-66d3c5e4e0bf] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Aug 17 
12:08:28.282: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-17T12:08:07Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-08-17T12:08:28Z]] name:name1 resourceVersion:717600 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:13e850f3-bcc5-42f6-91f2-8d723a7a6a09] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Aug 17 12:08:38.293: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-17T12:08:17Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-08-17T12:08:38Z]] name:name2 resourceVersion:717629 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:fcd5fe59-f616-4855-a280-66d3c5e4e0bf] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Aug 17 12:08:48.476: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-17T12:08:07Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-08-17T12:08:28Z]] name:name1 resourceVersion:717658 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:13e850f3-bcc5-42f6-91f2-8d723a7a6a09] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Aug 17 12:08:58.851: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-17T12:08:17Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-08-17T12:08:38Z]] name:name2 resourceVersion:717689 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:fcd5fe59-f616-4855-a280-66d3c5e4e0bf] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:09:09.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-8776" for this suite. 
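------------------------------
The ADDED/MODIFIED/DELETED lines above are ordinary watch events on the custom resource's REST endpoint; once the CRD is established, any client can stream them. A sketch using client-go's dynamic client against the noxus resource named in the selfLinks above (the kubeconfig path is an assumption):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed kubeconfig path
	if err != nil {
		panic(err)
	}
	dyn, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Cluster-scoped custom resource from the log: /apis/mygroup.example.com/v1beta1/noxus
	gvr := schema.GroupVersionResource{Group: "mygroup.example.com", Version: "v1beta1", Resource: "noxus"}
	w, err := dyn.Resource(gvr).Watch(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	// Every create, update, and delete of a CR arrives as one event,
	// exactly as logged above.
	for ev := range w.ResultChan() {
		fmt.Printf("Got : %s %v\n", ev.Type, ev.Object)
	}
}
------------------------------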
• [SLOW TEST:62.240 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42 watch on custom resource definition objects [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":303,"completed":110,"skipped":1995,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSS ------------------------------ [sig-api-machinery] server version should find the server version [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] server version /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:09:09.387: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename server-version STEP: Waiting for a default service account to be provisioned in namespace [It] should find the server version [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Request ServerVersion STEP: Confirm major version Aug 17 12:09:09.644: INFO: Major version: 1 STEP: Confirm minor version Aug 17 12:09:09.644: INFO: cleanMinorVersion: 19 Aug 17 12:09:09.644: INFO: Minor version: 19+ [AfterEach] [sig-api-machinery] server version /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:09:09.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "server-version-1931" for this suite. •{"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":303,"completed":111,"skipped":2002,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:09:09.668: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:09:21.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6251" for this suite. • [SLOW TEST:11.595 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":303,"completed":112,"skipped":2032,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Lease /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:09:21.264: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Lease /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:09:21.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-2921" for this suite. 
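------------------------------
"lease API should be available" reduces to CRUD against coordination.k8s.io/v1 Lease objects. A minimal sketch that creates one and reads the holder back — the name, namespace, and duration are illustrative, not the test's values.

package main

import (
	"context"
	"fmt"
	"time"

	coordinationv1 "k8s.io/api/coordination/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed kubeconfig path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	holder := "demo-holder"
	seconds := int32(30)
	lease := &coordinationv1.Lease{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-lease"},
		Spec: coordinationv1.LeaseSpec{
			HolderIdentity:       &holder,
			LeaseDurationSeconds: &seconds,
			AcquireTime:          &metav1.MicroTime{Time: time.Now()},
		},
	}
	created, err := cs.CoordinationV1().Leases("default").Create(ctx, lease, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("lease %s held by %s\n", created.Name, *created.Spec.HolderIdentity)
}
------------------------------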
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":303,"completed":113,"skipped":2062,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:09:21.549: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 17 12:09:24.657: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 17 12:09:27.252: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733262964, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733262964, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733262964, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733262964, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 17 12:09:29.281: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733262964, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733262964, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733262964, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733262964, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 17 12:09:32.454: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:09:32.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5606" for this suite. STEP: Destroying namespace "webhook-5606-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:11.531 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":303,"completed":114,"skipped":2069,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:09:33.081: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with configMap that has name projected-configmap-test-upd-faaa62c3-00d0-4ec4-8e1a-8d3d5ca02d64 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-faaa62c3-00d0-4ec4-8e1a-8d3d5ca02d64 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap 
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:10:45.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2150" for this suite. • [SLOW TEST:72.444 seconds] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":115,"skipped":2082,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:10:45.526: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Docker Containers /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:10:49.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-7656" for this suite. 
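The Docker Containers test that just finished needs only a pod whose container sets neither Command nor Args; the kubelet then runs the image's own ENTRYPOINT and CMD. A hedged sketch of such a pod in client-go terms (the pod name and the choice to reuse the suite's agnhost image are illustrative):

package sketches

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createDefaultCommandPod builds a pod that leaves Command and Args unset,
// so the container runs the image's own ENTRYPOINT/CMD — the behaviour the
// Docker Containers conformance test verifies.
func createDefaultCommandPod(ctx context.Context, cs kubernetes.Interface, ns string) error {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "image-defaults"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "main",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.20",
				// Command and Args deliberately omitted: Kubernetes falls
				// back to the image's ENTRYPOINT and CMD.
			}},
		},
	}
	_, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
	return err
}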
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":303,"completed":116,"skipped":2094,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:10:50.009: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-1583 STEP: creating a selector STEP: Creating the service pods in kubernetes Aug 17 12:10:50.224: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Aug 17 12:10:51.008: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Aug 17 12:10:53.556: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Aug 17 12:10:55.218: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Aug 17 12:10:57.046: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Aug 17 12:10:59.104: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Aug 17 12:11:01.583: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 17 12:11:03.014: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 17 12:11:05.164: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 17 12:11:07.014: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 17 12:11:09.164: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 17 12:11:11.016: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 17 12:11:13.014: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 17 12:11:15.015: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 17 12:11:17.200: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 17 12:11:19.044: INFO: The status of Pod netserver-0 is Running (Ready = true) Aug 17 12:11:19.060: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Aug 17 12:11:23.243: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.246:8080/dial?request=hostname&protocol=http&host=10.244.2.27&port=8080&tries=1'] Namespace:pod-network-test-1583 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 17 12:11:23.243: INFO: >>> 
kubeConfig: /root/.kube/config I0817 12:11:23.304711 10 log.go:181] (0x400015fc30) (0x40006a7720) Create stream I0817 12:11:23.304942 10 log.go:181] (0x400015fc30) (0x40006a7720) Stream added, broadcasting: 1 I0817 12:11:23.308117 10 log.go:181] (0x400015fc30) Reply frame received for 1 I0817 12:11:23.308280 10 log.go:181] (0x400015fc30) (0x40024bebe0) Create stream I0817 12:11:23.308350 10 log.go:181] (0x400015fc30) (0x40024bebe0) Stream added, broadcasting: 3 I0817 12:11:23.309756 10 log.go:181] (0x400015fc30) Reply frame received for 3 I0817 12:11:23.309912 10 log.go:181] (0x400015fc30) (0x40024bec80) Create stream I0817 12:11:23.309989 10 log.go:181] (0x400015fc30) (0x40024bec80) Stream added, broadcasting: 5 I0817 12:11:23.311295 10 log.go:181] (0x400015fc30) Reply frame received for 5 I0817 12:11:23.375170 10 log.go:181] (0x400015fc30) Data frame received for 3 I0817 12:11:23.375301 10 log.go:181] (0x40024bebe0) (3) Data frame handling I0817 12:11:23.375430 10 log.go:181] (0x40024bebe0) (3) Data frame sent I0817 12:11:23.375752 10 log.go:181] (0x400015fc30) Data frame received for 5 I0817 12:11:23.375911 10 log.go:181] (0x40024bec80) (5) Data frame handling I0817 12:11:23.376092 10 log.go:181] (0x400015fc30) Data frame received for 3 I0817 12:11:23.376263 10 log.go:181] (0x40024bebe0) (3) Data frame handling I0817 12:11:23.377621 10 log.go:181] (0x400015fc30) Data frame received for 1 I0817 12:11:23.377741 10 log.go:181] (0x40006a7720) (1) Data frame handling I0817 12:11:23.377836 10 log.go:181] (0x40006a7720) (1) Data frame sent I0817 12:11:23.377925 10 log.go:181] (0x400015fc30) (0x40006a7720) Stream removed, broadcasting: 1 I0817 12:11:23.378054 10 log.go:181] (0x400015fc30) Go away received I0817 12:11:23.378236 10 log.go:181] (0x400015fc30) (0x40006a7720) Stream removed, broadcasting: 1 I0817 12:11:23.378336 10 log.go:181] (0x400015fc30) (0x40024bebe0) Stream removed, broadcasting: 3 I0817 12:11:23.378419 10 log.go:181] (0x400015fc30) (0x40024bec80) Stream removed, broadcasting: 5 Aug 17 12:11:23.379: INFO: Waiting for responses: map[] Aug 17 12:11:23.384: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.246:8080/dial?request=hostname&protocol=http&host=10.244.1.245&port=8080&tries=1'] Namespace:pod-network-test-1583 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 17 12:11:23.384: INFO: >>> kubeConfig: /root/.kube/config I0817 12:11:23.444287 10 log.go:181] (0x4000e17970) (0x40024bf180) Create stream I0817 12:11:23.444447 10 log.go:181] (0x4000e17970) (0x40024bf180) Stream added, broadcasting: 1 I0817 12:11:23.449215 10 log.go:181] (0x4000e17970) Reply frame received for 1 I0817 12:11:23.449459 10 log.go:181] (0x4000e17970) (0x40017921e0) Create stream I0817 12:11:23.449636 10 log.go:181] (0x4000e17970) (0x40017921e0) Stream added, broadcasting: 3 I0817 12:11:23.451518 10 log.go:181] (0x4000e17970) Reply frame received for 3 I0817 12:11:23.451663 10 log.go:181] (0x4000e17970) (0x40024bf220) Create stream I0817 12:11:23.451749 10 log.go:181] (0x4000e17970) (0x40024bf220) Stream added, broadcasting: 5 I0817 12:11:23.453418 10 log.go:181] (0x4000e17970) Reply frame received for 5 I0817 12:11:23.531688 10 log.go:181] (0x4000e17970) Data frame received for 3 I0817 12:11:23.531833 10 log.go:181] (0x40017921e0) (3) Data frame handling I0817 12:11:23.531952 10 log.go:181] (0x4000e17970) Data frame received for 5 I0817 12:11:23.532052 10 log.go:181] (0x40024bf220) (5) Data frame 
handling I0817 12:11:23.532140 10 log.go:181] (0x4000e17970) Data frame received for 1 I0817 12:11:23.532236 10 log.go:181] (0x40024bf180) (1) Data frame handling I0817 12:11:23.532311 10 log.go:181] (0x40017921e0) (3) Data frame sent I0817 12:11:23.532401 10 log.go:181] (0x4000e17970) Data frame received for 3 I0817 12:11:23.532476 10 log.go:181] (0x40024bf180) (1) Data frame sent I0817 12:11:23.532592 10 log.go:181] (0x4000e17970) (0x40024bf180) Stream removed, broadcasting: 1 I0817 12:11:23.532671 10 log.go:181] (0x40017921e0) (3) Data frame handling I0817 12:11:23.532871 10 log.go:181] (0x4000e17970) Go away received I0817 12:11:23.533382 10 log.go:181] (0x4000e17970) (0x40024bf180) Stream removed, broadcasting: 1 I0817 12:11:23.533473 10 log.go:181] (0x4000e17970) (0x40017921e0) Stream removed, broadcasting: 3 I0817 12:11:23.533586 10 log.go:181] (0x4000e17970) (0x40024bf220) Stream removed, broadcasting: 5 Aug 17 12:11:23.533: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:11:23.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-1583" for this suite. • [SLOW TEST:33.535 seconds] [sig-network] Networking /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":303,"completed":117,"skipped":2103,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:11:23.545: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should contain environment variables for services [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 17 12:11:35.018: INFO: Waiting up to 5m0s for pod "client-envvars-add1816d-85e0-4416-bc92-c65bac0fc821" 
in namespace "pods-3654" to be "Succeeded or Failed" Aug 17 12:11:35.505: INFO: Pod "client-envvars-add1816d-85e0-4416-bc92-c65bac0fc821": Phase="Pending", Reason="", readiness=false. Elapsed: 486.231017ms Aug 17 12:11:37.512: INFO: Pod "client-envvars-add1816d-85e0-4416-bc92-c65bac0fc821": Phase="Pending", Reason="", readiness=false. Elapsed: 2.493238197s Aug 17 12:11:39.762: INFO: Pod "client-envvars-add1816d-85e0-4416-bc92-c65bac0fc821": Phase="Pending", Reason="", readiness=false. Elapsed: 4.743547964s Aug 17 12:11:41.767: INFO: Pod "client-envvars-add1816d-85e0-4416-bc92-c65bac0fc821": Phase="Pending", Reason="", readiness=false. Elapsed: 6.748235165s Aug 17 12:11:43.924: INFO: Pod "client-envvars-add1816d-85e0-4416-bc92-c65bac0fc821": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.905609046s STEP: Saw pod success Aug 17 12:11:43.924: INFO: Pod "client-envvars-add1816d-85e0-4416-bc92-c65bac0fc821" satisfied condition "Succeeded or Failed" Aug 17 12:11:43.929: INFO: Trying to get logs from node latest-worker2 pod client-envvars-add1816d-85e0-4416-bc92-c65bac0fc821 container env3cont: STEP: delete the pod Aug 17 12:11:44.551: INFO: Waiting for pod client-envvars-add1816d-85e0-4416-bc92-c65bac0fc821 to disappear Aug 17 12:11:44.865: INFO: Pod client-envvars-add1816d-85e0-4416-bc92-c65bac0fc821 no longer exists [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:11:44.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3654" for this suite. • [SLOW TEST:21.337 seconds] [k8s.io] Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should contain environment variables for services [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":303,"completed":118,"skipped":2103,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:11:44.883: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check is all data is printed [Conformance] 
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 17 12:11:46.023: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config version' Aug 17 12:11:48.353: INFO: stderr: "" Aug 17 12:11:48.353: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"19+\", GitVersion:\"v1.19.0-rc.4\", GitCommit:\"1afc53514032a44d091ae4a9f6e092171db9fe10\", GitTreeState:\"clean\", BuildDate:\"2020-08-04T14:29:10Z\", GoVersion:\"go1.15rc1\", Compiler:\"gc\", Platform:\"linux/arm64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"19+\", GitVersion:\"v1.19.0-rc.1\", GitCommit:\"2cbdfecbbd57dbd4e9f42d73a75fbbc6d9eadfd3\", GitTreeState:\"clean\", BuildDate:\"2020-07-19T21:33:31Z\", GoVersion:\"go1.14.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:11:48.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-687" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":303,"completed":119,"skipped":2108,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:11:49.290: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 17 12:11:52.502: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-40a594f8-5048-49f9-b133-9f6d7da0ef81" in namespace "security-context-test-468" to be "Succeeded or Failed" Aug 17 12:11:52.691: INFO: Pod "busybox-privileged-false-40a594f8-5048-49f9-b133-9f6d7da0ef81": Phase="Pending", Reason="", readiness=false. Elapsed: 188.77412ms Aug 17 12:11:54.721: INFO: Pod "busybox-privileged-false-40a594f8-5048-49f9-b133-9f6d7da0ef81": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218278033s Aug 17 12:11:58.573: INFO: Pod "busybox-privileged-false-40a594f8-5048-49f9-b133-9f6d7da0ef81": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.070194043s Aug 17 12:12:00.661: INFO: Pod "busybox-privileged-false-40a594f8-5048-49f9-b133-9f6d7da0ef81": Phase="Pending", Reason="", readiness=false. Elapsed: 8.159096252s Aug 17 12:12:02.667: INFO: Pod "busybox-privileged-false-40a594f8-5048-49f9-b133-9f6d7da0ef81": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.164992731s Aug 17 12:12:02.668: INFO: Pod "busybox-privileged-false-40a594f8-5048-49f9-b133-9f6d7da0ef81" satisfied condition "Succeeded or Failed" Aug 17 12:12:02.750: INFO: Got logs for pod "busybox-privileged-false-40a594f8-5048-49f9-b133-9f6d7da0ef81": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:12:02.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-468" for this suite. • [SLOW TEST:13.537 seconds] [k8s.io] Security Context /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 When creating a pod with privileged /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:227 should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":120,"skipped":2142,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:12:02.828: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 17 12:12:02.962: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:12:03.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-1230" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":303,"completed":121,"skipped":2144,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:12:04.181: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:12:13.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-9083" for this suite. STEP: Destroying namespace "nsdeletetest-9158" for this suite. Aug 17 12:12:13.622: INFO: Namespace nsdeletetest-9158 was already deleted STEP: Destroying namespace "nsdeletetest-2513" for this suite. 
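The Namespaces test above encodes a useful guarantee: deleting a namespace garbage-collects the services inside it. An approximate client-go replay of those STEPs (namespace and service names are invented, and the polling interval and timeout are arbitrary choices, not the suite's):

package sketches

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// namespaceDeletionRemovesServices follows the same sequence as the test:
// a service created in a namespace must disappear once that namespace is
// deleted and finalized.
func namespaceDeletionRemovesServices(ctx context.Context, cs kubernetes.Interface) error {
	ns := &corev1.Namespace{ObjectMeta: metav1.ObjectMeta{Name: "nsdelete-demo"}}
	if _, err := cs.CoreV1().Namespaces().Create(ctx, ns, metav1.CreateOptions{}); err != nil {
		return err
	}
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-svc"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"app": "demo"},
			Ports:    []corev1.ServicePort{{Port: 80}},
		},
	}
	if _, err := cs.CoreV1().Services("nsdelete-demo").Create(ctx, svc, metav1.CreateOptions{}); err != nil {
		return err
	}
	if err := cs.CoreV1().Namespaces().Delete(ctx, "nsdelete-demo", metav1.DeleteOptions{}); err != nil {
		return err
	}
	// Namespace finalization is asynchronous; wait until it is fully gone
	// before asserting that its services were removed with it.
	return wait.PollImmediate(2*time.Second, 3*time.Minute, func() (bool, error) {
		_, err := cs.CoreV1().Namespaces().Get(ctx, "nsdelete-demo", metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return true, nil
		}
		return false, nil
	})
}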
• [SLOW TEST:9.447 seconds] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":303,"completed":122,"skipped":2162,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:12:13.629: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-8570 [It] should have a working scale subresource [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating statefulset ss in namespace statefulset-8570 Aug 17 12:12:14.522: INFO: Found 0 stateful pods, waiting for 1 Aug 17 12:12:24.607: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Aug 17 12:12:24.648: INFO: Deleting all statefulset in ns statefulset-8570 Aug 17 12:12:24.896: INFO: Scaling statefulset ss to 0 Aug 17 12:12:35.391: INFO: Waiting for statefulset status.replicas updated to 0 Aug 17 12:12:35.396: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:12:35.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8570" for this suite. 
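The scale subresource that the StatefulSet test exercises can be driven directly with client-go's GetScale/UpdateScale, which modify .spec.replicas without a read-modify-write of the whole StatefulSet object. A sketch under those assumptions (the function name and log line are mine):

package sketches

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// scaleStatefulSet uses the scale subresource directly, the same surface the
// conformance test exercises: read the current Scale, change .spec.replicas,
// and write it back.
func scaleStatefulSet(ctx context.Context, cs kubernetes.Interface, ns, name string, replicas int32) error {
	scale, err := cs.AppsV1().StatefulSets(ns).GetScale(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	fmt.Printf("current replicas: %d\n", scale.Spec.Replicas)
	scale.Spec.Replicas = replicas
	_, err = cs.AppsV1().StatefulSets(ns).UpdateScale(ctx, name, scale, metav1.UpdateOptions{})
	return err
}

The scale subresource is also what kubectl scale and the HorizontalPodAutoscaler go through, which is why conformance pins down its behaviour here.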
• [SLOW TEST:21.842 seconds] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should have a working scale subresource [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":303,"completed":123,"skipped":2174,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:12:35.472: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-3589, will wait for the garbage collector to delete the pods Aug 17 12:12:41.793: INFO: Deleting Job.batch foo took: 9.137815ms Aug 17 12:12:41.894: INFO: Terminating Job.batch foo pods took: 100.795913ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:13:19.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-3589" for this suite. 
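As the log notes, deleting the Job hands its pods to the garbage collector, and the test then waits for them to vanish. A rough equivalent of that delete-and-wait flow (the job-name label is real and set by the Job controller; the helper name and timeouts are illustrative):

package sketches

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// deleteJobAndWait deletes a Job, then waits for the garbage collector to
// remove its pods, mirroring the "delete a job" flow in the log above.
func deleteJobAndWait(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	policy := metav1.DeletePropagationBackground
	if err := cs.BatchV1().Jobs(ns).Delete(ctx, name, metav1.DeleteOptions{
		PropagationPolicy: &policy,
	}); err != nil {
		return err
	}
	// The Job's pods carry the job-name label, so poll until none are left.
	return wait.PollImmediate(2*time.Second, 2*time.Minute, func() (bool, error) {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{
			LabelSelector: "job-name=" + name,
		})
		if err != nil {
			return false, err
		}
		return len(pods.Items) == 0, nil
	})
}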
• [SLOW TEST:44.387 seconds] [sig-apps] Job /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":303,"completed":124,"skipped":2180,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:13:19.862: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Aug 17 12:13:19.937: INFO: Waiting up to 5m0s for pod "downward-api-c94d2b55-5ab1-4df0-9ca2-f04544148a3e" in namespace "downward-api-1132" to be "Succeeded or Failed" Aug 17 12:13:19.942: INFO: Pod "downward-api-c94d2b55-5ab1-4df0-9ca2-f04544148a3e": Phase="Pending", Reason="", readiness=false. Elapsed: 5.215513ms Aug 17 12:13:21.950: INFO: Pod "downward-api-c94d2b55-5ab1-4df0-9ca2-f04544148a3e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013122215s Aug 17 12:13:23.958: INFO: Pod "downward-api-c94d2b55-5ab1-4df0-9ca2-f04544148a3e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021384273s Aug 17 12:13:26.141: INFO: Pod "downward-api-c94d2b55-5ab1-4df0-9ca2-f04544148a3e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.204423344s STEP: Saw pod success Aug 17 12:13:26.141: INFO: Pod "downward-api-c94d2b55-5ab1-4df0-9ca2-f04544148a3e" satisfied condition "Succeeded or Failed" Aug 17 12:13:26.595: INFO: Trying to get logs from node latest-worker pod downward-api-c94d2b55-5ab1-4df0-9ca2-f04544148a3e container dapi-container: STEP: delete the pod Aug 17 12:13:27.116: INFO: Waiting for pod downward-api-c94d2b55-5ab1-4df0-9ca2-f04544148a3e to disappear Aug 17 12:13:27.158: INFO: Pod downward-api-c94d2b55-5ab1-4df0-9ca2-f04544148a3e no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:13:27.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1132" for this suite. 
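The Downward API test builds a pod whose env vars reference limits.cpu and limits.memory through resourceFieldRef while the container declares no limits, so the kubelet substitutes node-allocatable values. A sketch of that pod shape (names, image, and command are placeholders, not the suite's):

package sketches

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// downwardAPIPod builds the kind of pod the test creates: env vars that pull
// limits.cpu and limits.memory via the downward API. With no limits set on
// the container, the values default to the node's allocatable resources.
func downwardAPIPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-defaults"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox:1.29",
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{
					{
						Name: "CPU_LIMIT",
						ValueFrom: &corev1.EnvVarSource{
							ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.cpu"},
						},
					},
					{
						Name: "MEMORY_LIMIT",
						ValueFrom: &corev1.EnvVarSource{
							ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.memory"},
						},
					},
				},
			}},
		},
	}
}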
• [SLOW TEST:7.362 seconds] [sig-node] Downward API /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":303,"completed":125,"skipped":2201,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:13:27.225: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 17 12:13:27.470: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
Aug 17 12:13:27.539: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 12:13:27.543: INFO: Number of nodes with available pods: 0 Aug 17 12:13:27.543: INFO: Node latest-worker is running more than one daemon pod Aug 17 12:13:28.555: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 12:13:28.561: INFO: Number of nodes with available pods: 0 Aug 17 12:13:28.561: INFO: Node latest-worker is running more than one daemon pod Aug 17 12:13:30.034: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 12:13:30.121: INFO: Number of nodes with available pods: 0 Aug 17 12:13:30.121: INFO: Node latest-worker is running more than one daemon pod Aug 17 12:13:30.556: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 12:13:30.562: INFO: Number of nodes with available pods: 0 Aug 17 12:13:30.562: INFO: Node latest-worker is running more than one daemon pod Aug 17 12:13:31.644: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 12:13:31.673: INFO: Number of nodes with available pods: 0 Aug 17 12:13:31.673: INFO: Node latest-worker is running more than one daemon pod Aug 17 12:13:32.592: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 12:13:32.598: INFO: Number of nodes with available pods: 1 Aug 17 12:13:32.598: INFO: Node latest-worker is running more than one daemon pod Aug 17 12:13:33.553: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 12:13:33.559: INFO: Number of nodes with available pods: 2 Aug 17 12:13:33.559: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Aug 17 12:13:34.318: INFO: Wrong image for pod: daemon-set-5xnn4. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 17 12:13:34.318: INFO: Wrong image for pod: daemon-set-wxbjf. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 17 12:13:34.723: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 12:13:35.730: INFO: Wrong image for pod: daemon-set-5xnn4. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 17 12:13:35.730: INFO: Wrong image for pod: daemon-set-wxbjf. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. 
Aug 17 12:13:35.739: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 12:13:36.779: INFO: Wrong image for pod: daemon-set-5xnn4. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 17 12:13:36.779: INFO: Wrong image for pod: daemon-set-wxbjf. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 17 12:13:36.871: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 12:13:37.731: INFO: Wrong image for pod: daemon-set-5xnn4. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 17 12:13:37.731: INFO: Pod daemon-set-5xnn4 is not available Aug 17 12:13:37.731: INFO: Wrong image for pod: daemon-set-wxbjf. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 17 12:13:37.741: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 12:13:38.732: INFO: Wrong image for pod: daemon-set-wxbjf. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 17 12:13:38.732: INFO: Pod daemon-set-xmxhj is not available Aug 17 12:13:38.763: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 12:13:39.790: INFO: Wrong image for pod: daemon-set-wxbjf. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 17 12:13:39.790: INFO: Pod daemon-set-xmxhj is not available Aug 17 12:13:39.800: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 12:13:40.922: INFO: Wrong image for pod: daemon-set-wxbjf. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 17 12:13:40.922: INFO: Pod daemon-set-xmxhj is not available Aug 17 12:13:40.933: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 12:13:41.731: INFO: Wrong image for pod: daemon-set-wxbjf. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 17 12:13:41.731: INFO: Pod daemon-set-xmxhj is not available Aug 17 12:13:41.740: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 12:13:42.919: INFO: Wrong image for pod: daemon-set-wxbjf. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 17 12:13:43.012: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 12:13:43.733: INFO: Wrong image for pod: daemon-set-wxbjf. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. 
Aug 17 12:13:43.739: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 12:13:44.732: INFO: Wrong image for pod: daemon-set-wxbjf. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 17 12:13:44.733: INFO: Pod daemon-set-wxbjf is not available Aug 17 12:13:44.743: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 12:13:45.785: INFO: Wrong image for pod: daemon-set-wxbjf. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 17 12:13:45.785: INFO: Pod daemon-set-wxbjf is not available Aug 17 12:13:45.815: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 12:13:46.759: INFO: Wrong image for pod: daemon-set-wxbjf. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 17 12:13:46.759: INFO: Pod daemon-set-wxbjf is not available Aug 17 12:13:46.769: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 12:13:48.287: INFO: Wrong image for pod: daemon-set-wxbjf. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 17 12:13:48.288: INFO: Pod daemon-set-wxbjf is not available Aug 17 12:13:48.370: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 12:13:48.796: INFO: Wrong image for pod: daemon-set-wxbjf. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 17 12:13:48.796: INFO: Pod daemon-set-wxbjf is not available Aug 17 12:13:48.804: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 12:13:49.742: INFO: Wrong image for pod: daemon-set-wxbjf. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 17 12:13:49.742: INFO: Pod daemon-set-wxbjf is not available Aug 17 12:13:49.751: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 12:13:50.733: INFO: Pod daemon-set-bkvgh is not available Aug 17 12:13:50.744: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
Aug 17 12:13:50.754: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 12:13:50.760: INFO: Number of nodes with available pods: 1 Aug 17 12:13:50.760: INFO: Node latest-worker is running more than one daemon pod Aug 17 12:13:51.836: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 12:13:51.843: INFO: Number of nodes with available pods: 1 Aug 17 12:13:51.843: INFO: Node latest-worker is running more than one daemon pod Aug 17 12:13:52.773: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 12:13:52.780: INFO: Number of nodes with available pods: 1 Aug 17 12:13:52.781: INFO: Node latest-worker is running more than one daemon pod Aug 17 12:13:53.772: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 12:13:53.780: INFO: Number of nodes with available pods: 1 Aug 17 12:13:53.781: INFO: Node latest-worker is running more than one daemon pod Aug 17 12:13:54.782: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 12:13:54.789: INFO: Number of nodes with available pods: 1 Aug 17 12:13:54.789: INFO: Node latest-worker is running more than one daemon pod Aug 17 12:13:55.772: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 12:13:55.779: INFO: Number of nodes with available pods: 2 Aug 17 12:13:55.779: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9700, will wait for the garbage collector to delete the pods Aug 17 12:13:55.891: INFO: Deleting DaemonSet.extensions daemon-set took: 29.075814ms Aug 17 12:13:56.391: INFO: Terminating DaemonSet.extensions daemon-set pods took: 500.728313ms Aug 17 12:14:10.344: INFO: Number of nodes with available pods: 0 Aug 17 12:14:10.344: INFO: Number of running nodes: 0, number of available pods: 0 Aug 17 12:14:10.348: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9700/daemonsets","resourceVersion":"719140"},"items":null} Aug 17 12:14:10.352: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9700/pods","resourceVersion":"719140"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:14:10.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-9700" for this suite. 
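Everything the RollingUpdate trace above shows, from the "Wrong image for pod" lines to pods being replaced node by node, is triggered by a single pod-template change. A minimal way to make that change with client-go, assuming a DaemonSet whose updateStrategy is RollingUpdate (the helper name and patch shape are mine, using an ordinary strategic-merge patch):

package sketches

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// rollDaemonSetImage patches the pod template image of a DaemonSet whose
// updateStrategy is RollingUpdate; the controller then replaces pods
// node-by-node, which is the behaviour the log above traces in detail.
func rollDaemonSetImage(ctx context.Context, cs kubernetes.Interface, ns, name, container, image string) error {
	patch := fmt.Sprintf(
		`{"spec":{"template":{"spec":{"containers":[{"name":%q,"image":%q}]}}}}`,
		container, image)
	_, err := cs.AppsV1().DaemonSets(ns).Patch(ctx, name,
		types.StrategicMergePatchType, []byte(patch), metav1.PatchOptions{})
	return err
}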
• [SLOW TEST:43.159 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":303,"completed":126,"skipped":2207,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:14:10.386: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:14:34.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-5800" for this suite. 
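"Locally restarted" in the Job test above means restartPolicy OnFailure: the kubelet restarts the failing container in place rather than the Job controller creating a replacement pod, and the Job still reaches its completion count. A self-contained sketch of a Job with that property (the completion counts, image, and fail-once trick are invented for illustration):

package sketches

import (
	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// localRestartJob sketches a Job whose pods use RestartPolicy OnFailure.
// Each pod fails on its first attempt and succeeds after the kubelet
// restarts the container; the emptyDir volume survives container restarts
// within the pod, so the marker file is visible to the second attempt.
func localRestartJob() *batchv1.Job {
	parallelism := int32(2)
	completions := int32(4)
	return &batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{Name: "local-restart"},
		Spec: batchv1.JobSpec{
			Parallelism: &parallelism,
			Completions: &completions,
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					RestartPolicy: corev1.RestartPolicyOnFailure,
					Volumes: []corev1.Volume{{
						Name:         "state",
						VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
					}},
					Containers: []corev1.Container{{
						Name:         "worker",
						Image:        "busybox:1.29",
						VolumeMounts: []corev1.VolumeMount{{Name: "state", MountPath: "/state"}},
						// Fail once, then succeed after the local restart.
						Command: []string{"sh", "-c",
							"test -f /state/done && exit 0; touch /state/done; exit 1"},
					}},
				},
			},
		},
	}
}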
• [SLOW TEST:24.364 seconds] [sig-apps] Job /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":303,"completed":127,"skipped":2235,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:14:34.752: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: set up a multi version CRD Aug 17 12:14:34.930: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:16:13.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9199" for this suite. 
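------------------------------
The check above boils down to flipping served: false on one version of a multi-version CRD and confirming its schema drops out of the aggregated OpenAPI document while the other version is untouched. A sketch, assuming a CRD named mycrds.example.com whose first listed version is the one being retired (the definition-key naming pattern is also an assumption):

  # stop serving the first entry in .spec.versions
  kubectl patch crd mycrds.example.com --type=json \
    -p='[{"op": "replace", "path": "/spec/versions/0/served", "value": false}]'
  # confirm which versions are still served
  kubectl get crd mycrds.example.com \
    -o jsonpath='{range .spec.versions[*]}{.name}={.served}{"\n"}{end}'
  # the unserved version's definitions should no longer appear in the published spec
  kubectl get --raw /openapi/v2 | grep -c 'com.example.v1.MyCrd' || true
------------------------------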
• [SLOW TEST:98.918 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":303,"completed":128,"skipped":2243,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:16:13.672: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0817 12:16:29.237633 10 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Aug 17 12:17:32.312: INFO: MetricsGrabber failed to grab metrics. Skipping metrics gathering. Aug 17 12:17:32.313: INFO: Deleting pod "simpletest-rc-to-be-deleted-6v5hp" in namespace "gc-9504" Aug 17 12:17:32.781: INFO: Deleting pod "simpletest-rc-to-be-deleted-9fhl6" in namespace "gc-9504" Aug 17 12:17:33.493: INFO: Deleting pod "simpletest-rc-to-be-deleted-k2jhp" in namespace "gc-9504" Aug 17 12:17:34.132: INFO: Deleting pod "simpletest-rc-to-be-deleted-kc7fs" in namespace "gc-9504" Aug 17 12:17:34.403: INFO: Deleting pod "simpletest-rc-to-be-deleted-ngf4p" in namespace "gc-9504" [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:17:35.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9504" for this suite. 
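------------------------------
The invariant verified above: a dependent is collected only once every owner in its ownerReferences is gone, so pods that were given simpletest-rc-to-stay as a second owner survive the foreground deletion of simpletest-rc-to-be-deleted. The suite drives this through the API; by hand it looks roughly like this (the pod name is a placeholder):

  # inspect a pod's owners
  kubectl -n gc-9504 get pod <pod-name> \
    -o jsonpath='{range .metadata.ownerReferences[*]}{.kind}/{.name}{"\n"}{end}'
  # delete one owner with foreground propagation
  kubectl proxy --port=8001 &
  curl -X DELETE \
    'http://127.0.0.1:8001/api/v1/namespaces/gc-9504/replicationcontrollers/simpletest-rc-to-be-deleted' \
    -H 'Content-Type: application/json' \
    -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}'
------------------------------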
• [SLOW TEST:82.282 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":303,"completed":129,"skipped":2285,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:17:35.955: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-3765.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-3765.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3765.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-3765.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-3765.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-3765.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 17 12:17:45.534: INFO: DNS probes using dns-3765/dns-test-67a22942-2ce6-4f89-8df6-9a5d2da9baff succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:17:46.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3765" for this suite. • [SLOW TEST:10.491 seconds] [sig-network] DNS /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":303,"completed":130,"skipped":2288,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:17:46.447: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 17 12:17:46.709: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:17:53.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-9030" for this suite. 
• [SLOW TEST:7.355 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":303,"completed":131,"skipped":2294,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:17:53.803: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should provide secure master service [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:17:53.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-368" for this suite. 
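------------------------------
The "secure master service" just verified is the built-in kubernetes Service in the default namespace, which must expose the API server on a port named https at 443. The equivalent CLI check:

  kubectl get service kubernetes -n default \
    -o jsonpath='{.spec.ports[?(@.name=="https")].port}'   # expect: 443
  kubectl get endpoints kubernetes -n default              # the apiserver address(es)
------------------------------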
[AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":303,"completed":132,"skipped":2313,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:17:53.952: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 17 12:17:58.760: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 17 12:18:00.778: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733263478, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733263478, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733263478, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733263478, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 17 12:18:02.787: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733263478, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733263478, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733263478, 
loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733263478, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 17 12:18:05.852: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:18:19.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6895" for this suite. STEP: Destroying namespace "webhook-6895-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:25.363 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":303,"completed":133,"skipped":2348,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:18:19.320: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace 
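------------------------------
Before the Deployment rollover test below proceeds, a note on the admission-webhook timeouts just exercised: all four scenarios hinge on two fields of the registration, timeoutSeconds (1s versus the v1 default of 10s) and failurePolicy (Fail versus Ignore). A sketch of the failing combination; the configuration name is illustrative, the backend path is an assumption, and caBundle is omitted:

  cat <<'EOF' | kubectl apply -f -
  apiVersion: admissionregistration.k8s.io/v1
  kind: ValidatingWebhookConfiguration
  metadata:
    name: slow-webhook-example
  webhooks:
  - name: slow.example.com
    timeoutSeconds: 1                  # shorter than the backend's 5s delay => request fails
    failurePolicy: Fail                # with Ignore, the same timeout is swallowed
    clientConfig:
      service:
        namespace: webhook-6895        # as in the log
        name: e2e-test-webhook
        path: /always-allow-delay-5s   # assumed path for the suite's delaying backend
    rules:
    - apiGroups: [""]
      apiVersions: ["v1"]
      operations: ["CREATE"]
      resources: ["configmaps"]
    sideEffects: None
    admissionReviewVersions: ["v1"]
  EOF
------------------------------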
[BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] deployment should support rollover [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 17 12:18:19.415: INFO: Pod name rollover-pod: Found 0 pods out of 1 Aug 17 12:18:24.423: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Aug 17 12:18:24.424: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Aug 17 12:18:26.432: INFO: Creating deployment "test-rollover-deployment" Aug 17 12:18:26.466: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Aug 17 12:18:28.491: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Aug 17 12:18:28.502: INFO: Ensure that both replica sets have 1 created replica Aug 17 12:18:28.554: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Aug 17 12:18:28.565: INFO: Updating deployment test-rollover-deployment Aug 17 12:18:28.566: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Aug 17 12:18:30.626: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Aug 17 12:18:30.637: INFO: Make sure deployment "test-rollover-deployment" is complete Aug 17 12:18:30.649: INFO: all replica sets need to contain the pod-template-hash label Aug 17 12:18:30.649: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733263506, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733263506, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733263508, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733263506, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 17 12:18:32.851: INFO: all replica sets need to contain the pod-template-hash label Aug 17 12:18:32.852: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733263506, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733263506, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733263508, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733263506, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 17 12:18:34.665: INFO: all replica sets need to contain the pod-template-hash label Aug 17 12:18:34.665: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733263506, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733263506, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733263513, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733263506, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 17 12:18:36.665: INFO: all replica sets need to contain the pod-template-hash label Aug 17 12:18:36.666: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733263506, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733263506, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733263513, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733263506, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 17 12:18:39.121: INFO: all replica sets need to contain the pod-template-hash label Aug 17 12:18:39.122: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733263506, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733263506, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733263513, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733263506, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 17 12:18:40.665: INFO: all replica sets need to contain the pod-template-hash label Aug 17 12:18:40.665: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733263506, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733263506, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733263513, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733263506, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 17 12:18:42.665: INFO: all replica sets need to contain the pod-template-hash label Aug 17 12:18:42.666: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733263506, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733263506, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733263513, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733263506, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 17 12:18:44.667: INFO: Aug 17 12:18:44.667: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 Aug 17 12:18:44.694: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-1281 /apis/apps/v1/namespaces/deployment-1281/deployments/test-rollover-deployment 91217873-44cf-4ea3-b241-f7a608486445 720475 2 2020-08-17 12:18:26 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-08-17 12:18:28 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-08-17 12:18:43 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0x40038370f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-08-17 12:18:26 +0000 UTC,LastTransitionTime:2020-08-17 12:18:26 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-5797c7764" has successfully progressed.,LastUpdateTime:2020-08-17 12:18:43 +0000 UTC,LastTransitionTime:2020-08-17 12:18:26 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Aug 17 12:18:44.702: INFO: New ReplicaSet "test-rollover-deployment-5797c7764" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-5797c7764 deployment-1281 /apis/apps/v1/namespaces/deployment-1281/replicasets/test-rollover-deployment-5797c7764 30fe88c8-0527-4c1e-8f9f-f4e7380c5ad3 720464 2 2020-08-17 12:18:28 +0000 UTC map[name:rollover-pod pod-template-hash:5797c7764] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 91217873-44cf-4ea3-b241-f7a608486445 0x40038375f0 0x40038375f1}] [] [{kube-controller-manager Update apps/v1 2020-08-17 12:18:43 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"91217873-44cf-4ea3-b241-f7a608486445\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5797c7764,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:5797c7764] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0x4003837668 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Aug 17 12:18:44.703: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Aug 17 12:18:44.703: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-1281 /apis/apps/v1/namespaces/deployment-1281/replicasets/test-rollover-controller 91b3098a-b535-461e-ba92-65a56be45cfc 720474 2 2020-08-17 12:18:19 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 91217873-44cf-4ea3-b241-f7a608486445 0x40038374e7 0x40038374e8}] [] [{e2e.test Update apps/v1 2020-08-17 12:18:19 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-08-17 12:18:43 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"91217873-44cf-4ea3-b241-f7a608486445\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0x4003837588 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Aug 17 12:18:44.704: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-78bc8b888c deployment-1281 /apis/apps/v1/namespaces/deployment-1281/replicasets/test-rollover-deployment-78bc8b888c c1bb15af-469a-436b-ac40-28eecce8c0e0 720414 2 2020-08-17 12:18:26 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 91217873-44cf-4ea3-b241-f7a608486445 0x40038376d7 0x40038376d8}] [] [{kube-controller-manager Update apps/v1 2020-08-17 12:18:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"91217873-44cf-4ea3-b241-f7a608486445\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 78bc8b888c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0x4003837768 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Aug 17 12:18:44.712: INFO: Pod "test-rollover-deployment-5797c7764-bnlsn" is available: &Pod{ObjectMeta:{test-rollover-deployment-5797c7764-bnlsn test-rollover-deployment-5797c7764- deployment-1281 /api/v1/namespaces/deployment-1281/pods/test-rollover-deployment-5797c7764-bnlsn 03d13ccb-11ae-459e-8a6f-2b13c06d69f1 720436 0 2020-08-17 12:18:28 +0000 UTC map[name:rollover-pod pod-template-hash:5797c7764] map[] [{apps/v1 ReplicaSet test-rollover-deployment-5797c7764 30fe88c8-0527-4c1e-8f9f-f4e7380c5ad3 0x4003837f80 0x4003837f81}] [] [{kube-controller-manager Update v1 2020-08-17 12:18:28 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"30fe88c8-0527-4c1e-8f9f-f4e7380c5ad3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-17 12:18:33 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.9\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5bccs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5bccs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5bccs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 12:18:28 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 12:18:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 12:18:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 12:18:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.1.9,StartTime:2020-08-17 12:18:28 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-17 12:18:31 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0,ContainerID:containerd://dd56a2b374d1b91cff31a46970b27b1b3a1a14e291e9d14bec91139f87a2a9c8,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.9,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:18:44.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1281" for this suite. • [SLOW TEST:25.407 seconds] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":303,"completed":134,"skipped":2415,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:18:44.730: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Update Demo /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:308 
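------------------------------
Before the kubectl scale test below gets under way, here is the rollover pattern the Deployment test above exercised, reconstructed from the spec dumped at 12:18:44 (only the retagged image is illustrative). With maxUnavailable: 0 and minReadySeconds: 10, the old pod keeps serving until the new one has been ready for 10 seconds, after which both old ReplicaSets scale to zero, as asserted in the log:

  cat <<'EOF' | kubectl apply -f -
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: test-rollover-deployment
  spec:
    replicas: 1
    minReadySeconds: 10
    strategy:
      type: RollingUpdate
      rollingUpdate:
        maxSurge: 1
        maxUnavailable: 0
    selector:
      matchLabels:
        name: rollover-pod
    template:
      metadata:
        labels:
          name: rollover-pod
      spec:
        containers:
        - name: agnhost
          image: k8s.gcr.io/e2e-test-images/agnhost:2.20
  EOF
  # roll over mid-deploy by retargeting the pod template (tag is illustrative)
  kubectl set image deployment/test-rollover-deployment agnhost=k8s.gcr.io/e2e-test-images/agnhost:2.21
------------------------------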
[It] should scale a replication controller [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a replication controller Aug 17 12:18:44.860: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6285' Aug 17 12:18:53.604: INFO: stderr: "" Aug 17 12:18:53.604: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Aug 17 12:18:53.605: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6285' Aug 17 12:18:54.987: INFO: stderr: "" Aug 17 12:18:54.988: INFO: stdout: "update-demo-nautilus-p4mx2 update-demo-nautilus-t6hfs " Aug 17 12:18:54.988: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-p4mx2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6285' Aug 17 12:18:56.541: INFO: stderr: "" Aug 17 12:18:56.541: INFO: stdout: "" Aug 17 12:18:56.541: INFO: update-demo-nautilus-p4mx2 is created but not running Aug 17 12:19:01.542: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6285' Aug 17 12:19:02.998: INFO: stderr: "" Aug 17 12:19:02.998: INFO: stdout: "update-demo-nautilus-p4mx2 update-demo-nautilus-t6hfs " Aug 17 12:19:02.999: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-p4mx2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6285' Aug 17 12:19:04.417: INFO: stderr: "" Aug 17 12:19:04.418: INFO: stdout: "true" Aug 17 12:19:04.418: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-p4mx2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6285' Aug 17 12:19:07.009: INFO: stderr: "" Aug 17 12:19:07.009: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 17 12:19:07.009: INFO: validating pod update-demo-nautilus-p4mx2 Aug 17 12:19:07.038: INFO: got data: { "image": "nautilus.jpg" } Aug 17 12:19:07.038: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 17 12:19:07.038: INFO: update-demo-nautilus-p4mx2 is verified up and running Aug 17 12:19:07.039: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-t6hfs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6285' Aug 17 12:19:08.522: INFO: stderr: "" Aug 17 12:19:08.522: INFO: stdout: "true" Aug 17 12:19:08.522: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-t6hfs -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6285' Aug 17 12:19:09.983: INFO: stderr: "" Aug 17 12:19:09.983: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 17 12:19:09.983: INFO: validating pod update-demo-nautilus-t6hfs Aug 17 12:19:10.705: INFO: got data: { "image": "nautilus.jpg" } Aug 17 12:19:10.705: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 17 12:19:10.705: INFO: update-demo-nautilus-t6hfs is verified up and running STEP: scaling down the replication controller Aug 17 12:19:10.719: INFO: scanned /root for discovery docs: Aug 17 12:19:10.719: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-6285' Aug 17 12:19:13.422: INFO: stderr: "" Aug 17 12:19:13.422: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Aug 17 12:19:13.423: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6285' Aug 17 12:19:14.993: INFO: stderr: "" Aug 17 12:19:14.993: INFO: stdout: "update-demo-nautilus-p4mx2 update-demo-nautilus-t6hfs " STEP: Replicas for name=update-demo: expected=1 actual=2 Aug 17 12:19:19.994: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6285' Aug 17 12:19:21.578: INFO: stderr: "" Aug 17 12:19:21.578: INFO: stdout: "update-demo-nautilus-p4mx2 " Aug 17 12:19:21.579: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-p4mx2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6285' Aug 17 12:19:22.932: INFO: stderr: "" Aug 17 12:19:22.932: INFO: stdout: "true" Aug 17 12:19:22.932: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-p4mx2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6285' Aug 17 12:19:24.339: INFO: stderr: "" Aug 17 12:19:24.339: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 17 12:19:24.339: INFO: validating pod update-demo-nautilus-p4mx2 Aug 17 12:19:24.524: INFO: got data: { "image": "nautilus.jpg" } Aug 17 12:19:24.525: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Aug 17 12:19:24.525: INFO: update-demo-nautilus-p4mx2 is verified up and running STEP: scaling up the replication controller Aug 17 12:19:24.532: INFO: scanned /root for discovery docs: Aug 17 12:19:24.533: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-6285' Aug 17 12:19:26.961: INFO: stderr: "" Aug 17 12:19:26.961: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Aug 17 12:19:26.962: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6285' Aug 17 12:19:28.536: INFO: stderr: "" Aug 17 12:19:28.536: INFO: stdout: "update-demo-nautilus-hhwwr update-demo-nautilus-p4mx2 " Aug 17 12:19:28.537: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hhwwr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6285' Aug 17 12:19:30.243: INFO: stderr: "" Aug 17 12:19:30.243: INFO: stdout: "" Aug 17 12:19:30.243: INFO: update-demo-nautilus-hhwwr is created but not running Aug 17 12:19:35.244: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6285' Aug 17 12:19:36.714: INFO: stderr: "" Aug 17 12:19:36.714: INFO: stdout: "update-demo-nautilus-hhwwr update-demo-nautilus-p4mx2 " Aug 17 12:19:36.715: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hhwwr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6285' Aug 17 12:19:38.210: INFO: stderr: "" Aug 17 12:19:38.210: INFO: stdout: "true" Aug 17 12:19:38.210: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hhwwr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6285' Aug 17 12:19:39.677: INFO: stderr: "" Aug 17 12:19:39.678: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 17 12:19:39.678: INFO: validating pod update-demo-nautilus-hhwwr Aug 17 12:19:39.684: INFO: got data: { "image": "nautilus.jpg" } Aug 17 12:19:39.684: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 17 12:19:39.684: INFO: update-demo-nautilus-hhwwr is verified up and running Aug 17 12:19:39.684: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-p4mx2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6285' Aug 17 12:19:41.115: INFO: stderr: "" Aug 17 12:19:41.115: INFO: stdout: "true" Aug 17 12:19:41.116: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-p4mx2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6285' Aug 17 12:19:42.582: INFO: stderr: "" Aug 17 12:19:42.583: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 17 12:19:42.583: INFO: validating pod update-demo-nautilus-p4mx2 Aug 17 12:19:42.588: INFO: got data: { "image": "nautilus.jpg" } Aug 17 12:19:42.588: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 17 12:19:42.588: INFO: update-demo-nautilus-p4mx2 is verified up and running STEP: using delete to clean up resources Aug 17 12:19:42.589: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6285' Aug 17 12:19:44.001: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 17 12:19:44.001: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Aug 17 12:19:44.001: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-6285' Aug 17 12:19:45.554: INFO: stderr: "No resources found in kubectl-6285 namespace.\n" Aug 17 12:19:45.554: INFO: stdout: "" Aug 17 12:19:45.554: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-6285 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Aug 17 12:19:46.995: INFO: stderr: "" Aug 17 12:19:46.995: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:19:46.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6285" for this suite. 
• [SLOW TEST:62.281 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:306 should scale a replication controller [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":303,"completed":135,"skipped":2448,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:19:47.014: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Kubectl logs /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1415 STEP: creating a pod Aug 17 12:19:47.135: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config run logs-generator --image=k8s.gcr.io/e2e-test-images/agnhost:2.20 --namespace=kubectl-8115 -- logs-generator --log-lines-total 100 --run-duration 20s' Aug 17 12:19:48.552: INFO: stderr: "" Aug 17 12:19:48.552: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Waiting for log generator to start. Aug 17 12:19:48.553: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Aug 17 12:19:48.553: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-8115" to be "running and ready, or succeeded" Aug 17 12:19:49.075: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 521.44974ms Aug 17 12:19:51.082: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.528050787s Aug 17 12:19:53.089: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.535880616s Aug 17 12:19:53.090: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Aug 17 12:19:53.090: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] STEP: checking for matching strings Aug 17 12:19:53.091: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8115' Aug 17 12:19:54.568: INFO: stderr: "" Aug 17 12:19:54.568: INFO: stdout: "I0817 12:19:51.339326 1 logs_generator.go:76] 0 POST /api/v1/namespaces/kube-system/pods/lbh 451\nI0817 12:19:51.539468 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/ns/pods/74q 333\nI0817 12:19:51.739505 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/kube-system/pods/7gxm 424\nI0817 12:19:51.939645 1 logs_generator.go:76] 3 GET /api/v1/namespaces/default/pods/rdq4 386\nI0817 12:19:52.139450 1 logs_generator.go:76] 4 GET /api/v1/namespaces/ns/pods/9794 336\nI0817 12:19:52.339381 1 logs_generator.go:76] 5 POST /api/v1/namespaces/ns/pods/br5 441\nI0817 12:19:52.539466 1 logs_generator.go:76] 6 POST /api/v1/namespaces/ns/pods/8vm 473\nI0817 12:19:52.739495 1 logs_generator.go:76] 7 GET /api/v1/namespaces/default/pods/tlb5 216\nI0817 12:19:52.939479 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/kube-system/pods/d4p 501\nI0817 12:19:53.139494 1 logs_generator.go:76] 9 GET /api/v1/namespaces/ns/pods/kwwp 503\nI0817 12:19:53.339455 1 logs_generator.go:76] 10 POST /api/v1/namespaces/kube-system/pods/f2f 330\nI0817 12:19:53.539439 1 logs_generator.go:76] 11 GET /api/v1/namespaces/default/pods/pdq 354\nI0817 12:19:53.739458 1 logs_generator.go:76] 12 PUT /api/v1/namespaces/ns/pods/sh4n 443\nI0817 12:19:53.939434 1 logs_generator.go:76] 13 GET /api/v1/namespaces/default/pods/hmrv 264\nI0817 12:19:54.139500 1 logs_generator.go:76] 14 POST /api/v1/namespaces/kube-system/pods/jg5 417\nI0817 12:19:54.339421 1 logs_generator.go:76] 15 GET /api/v1/namespaces/default/pods/lv8 570\nI0817 12:19:54.539490 1 logs_generator.go:76] 16 GET /api/v1/namespaces/ns/pods/vgf4 584\n" STEP: limiting log lines Aug 17 12:19:54.569: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8115 --tail=1' Aug 17 12:19:56.074: INFO: stderr: "" Aug 17 12:19:56.074: INFO: stdout: "I0817 12:19:55.939452 1 logs_generator.go:76] 23 POST /api/v1/namespaces/default/pods/w49 443\n" Aug 17 12:19:56.074: INFO: got output "I0817 12:19:55.939452 1 logs_generator.go:76] 23 POST /api/v1/namespaces/default/pods/w49 443\n" STEP: limiting log bytes Aug 17 12:19:56.075: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8115 --limit-bytes=1' Aug 17 12:19:57.539: INFO: stderr: "" Aug 17 12:19:57.539: INFO: stdout: "I" Aug 17 12:19:57.539: INFO: got output "I" STEP: exposing timestamps Aug 17 12:19:57.540: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8115 --tail=1 --timestamps' Aug 17 12:19:58.987: INFO: stderr: "" Aug 17 12:19:58.987: INFO: stdout: "2020-08-17T12:19:58.939597300Z I0817 12:19:58.939444 1 logs_generator.go:76] 38 POST /api/v1/namespaces/ns/pods/vbhb 359\n" Aug 17 12:19:58.988: INFO: got output "2020-08-17T12:19:58.939597300Z I0817 12:19:58.939444 1 logs_generator.go:76] 38 POST /api/v1/namespaces/ns/pods/vbhb 359\n" STEP: restricting to a time range Aug 17 12:20:01.491: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config logs 
logs-generator logs-generator --namespace=kubectl-8115 --since=1s' Aug 17 12:20:04.045: INFO: stderr: "" Aug 17 12:20:04.046: INFO: stdout: "I0817 12:20:02.939469 1 logs_generator.go:76] 58 POST /api/v1/namespaces/kube-system/pods/fr8b 204\nI0817 12:20:03.139517 1 logs_generator.go:76] 59 POST /api/v1/namespaces/default/pods/zl7 335\nI0817 12:20:03.339439 1 logs_generator.go:76] 60 PUT /api/v1/namespaces/kube-system/pods/8bg7 451\nI0817 12:20:03.539469 1 logs_generator.go:76] 61 GET /api/v1/namespaces/ns/pods/7rmg 575\nI0817 12:20:03.739484 1 logs_generator.go:76] 62 PUT /api/v1/namespaces/default/pods/g8t 511\nI0817 12:20:03.939424 1 logs_generator.go:76] 63 PUT /api/v1/namespaces/kube-system/pods/nvbf 371\n" Aug 17 12:20:04.046: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8115 --since=24h' Aug 17 12:20:05.492: INFO: stderr: "" Aug 17 12:20:05.493: INFO: stdout: "I0817 12:19:51.339326 1 logs_generator.go:76] 0 POST /api/v1/namespaces/kube-system/pods/lbh 451\nI0817 12:19:51.539468 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/ns/pods/74q 333\nI0817 12:19:51.739505 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/kube-system/pods/7gxm 424\nI0817 12:19:51.939645 1 logs_generator.go:76] 3 GET /api/v1/namespaces/default/pods/rdq4 386\nI0817 12:19:52.139450 1 logs_generator.go:76] 4 GET /api/v1/namespaces/ns/pods/9794 336\nI0817 12:19:52.339381 1 logs_generator.go:76] 5 POST /api/v1/namespaces/ns/pods/br5 441\nI0817 12:19:52.539466 1 logs_generator.go:76] 6 POST /api/v1/namespaces/ns/pods/8vm 473\nI0817 12:19:52.739495 1 logs_generator.go:76] 7 GET /api/v1/namespaces/default/pods/tlb5 216\nI0817 12:19:52.939479 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/kube-system/pods/d4p 501\nI0817 12:19:53.139494 1 logs_generator.go:76] 9 GET /api/v1/namespaces/ns/pods/kwwp 503\nI0817 12:19:53.339455 1 logs_generator.go:76] 10 POST /api/v1/namespaces/kube-system/pods/f2f 330\nI0817 12:19:53.539439 1 logs_generator.go:76] 11 GET /api/v1/namespaces/default/pods/pdq 354\nI0817 12:19:53.739458 1 logs_generator.go:76] 12 PUT /api/v1/namespaces/ns/pods/sh4n 443\nI0817 12:19:53.939434 1 logs_generator.go:76] 13 GET /api/v1/namespaces/default/pods/hmrv 264\nI0817 12:19:54.139500 1 logs_generator.go:76] 14 POST /api/v1/namespaces/kube-system/pods/jg5 417\nI0817 12:19:54.339421 1 logs_generator.go:76] 15 GET /api/v1/namespaces/default/pods/lv8 570\nI0817 12:19:54.539490 1 logs_generator.go:76] 16 GET /api/v1/namespaces/ns/pods/vgf4 584\nI0817 12:19:54.739505 1 logs_generator.go:76] 17 GET /api/v1/namespaces/default/pods/jgds 527\nI0817 12:19:54.940203 1 logs_generator.go:76] 18 GET /api/v1/namespaces/kube-system/pods/s8r 334\nI0817 12:19:55.139471 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/ns/pods/n7j 442\nI0817 12:19:55.339444 1 logs_generator.go:76] 20 POST /api/v1/namespaces/kube-system/pods/d7w 226\nI0817 12:19:55.539432 1 logs_generator.go:76] 21 POST /api/v1/namespaces/default/pods/pb54 506\nI0817 12:19:55.739348 1 logs_generator.go:76] 22 POST /api/v1/namespaces/default/pods/whs7 573\nI0817 12:19:55.939452 1 logs_generator.go:76] 23 POST /api/v1/namespaces/default/pods/w49 443\nI0817 12:19:56.139455 1 logs_generator.go:76] 24 POST /api/v1/namespaces/kube-system/pods/w275 278\nI0817 12:19:56.339396 1 logs_generator.go:76] 25 PUT /api/v1/namespaces/kube-system/pods/kmbt 489\nI0817 12:19:56.539454 1 logs_generator.go:76] 26 PUT /api/v1/namespaces/default/pods/9lq 233\nI0817 
12:19:56.739431 1 logs_generator.go:76] 27 PUT /api/v1/namespaces/ns/pods/zblg 409\nI0817 12:19:56.939458 1 logs_generator.go:76] 28 POST /api/v1/namespaces/default/pods/84p 364\nI0817 12:19:57.139384 1 logs_generator.go:76] 29 POST /api/v1/namespaces/kube-system/pods/gctk 286\nI0817 12:19:57.339446 1 logs_generator.go:76] 30 POST /api/v1/namespaces/default/pods/blq9 522\nI0817 12:19:57.539430 1 logs_generator.go:76] 31 PUT /api/v1/namespaces/default/pods/2wf 544\nI0817 12:19:57.739469 1 logs_generator.go:76] 32 POST /api/v1/namespaces/ns/pods/bfv 506\nI0817 12:19:57.939376 1 logs_generator.go:76] 33 POST /api/v1/namespaces/ns/pods/zxjm 316\nI0817 12:19:58.139459 1 logs_generator.go:76] 34 POST /api/v1/namespaces/default/pods/fl8r 298\nI0817 12:19:58.339446 1 logs_generator.go:76] 35 PUT /api/v1/namespaces/kube-system/pods/wfn 256\nI0817 12:19:58.539426 1 logs_generator.go:76] 36 POST /api/v1/namespaces/default/pods/hknj 505\nI0817 12:19:58.739455 1 logs_generator.go:76] 37 POST /api/v1/namespaces/kube-system/pods/n7m6 227\nI0817 12:19:58.939444 1 logs_generator.go:76] 38 POST /api/v1/namespaces/ns/pods/vbhb 359\nI0817 12:19:59.139454 1 logs_generator.go:76] 39 GET /api/v1/namespaces/kube-system/pods/xkp 496\nI0817 12:19:59.339501 1 logs_generator.go:76] 40 PUT /api/v1/namespaces/kube-system/pods/lh2 435\nI0817 12:19:59.539455 1 logs_generator.go:76] 41 POST /api/v1/namespaces/default/pods/848n 276\nI0817 12:19:59.739485 1 logs_generator.go:76] 42 PUT /api/v1/namespaces/default/pods/cp6 473\nI0817 12:19:59.939501 1 logs_generator.go:76] 43 GET /api/v1/namespaces/kube-system/pods/5kpf 232\nI0817 12:20:00.139460 1 logs_generator.go:76] 44 PUT /api/v1/namespaces/default/pods/wtk 482\nI0817 12:20:00.339431 1 logs_generator.go:76] 45 GET /api/v1/namespaces/default/pods/v8z6 587\nI0817 12:20:00.539490 1 logs_generator.go:76] 46 POST /api/v1/namespaces/kube-system/pods/5sm 548\nI0817 12:20:00.739477 1 logs_generator.go:76] 47 PUT /api/v1/namespaces/default/pods/cnq 294\nI0817 12:20:00.939375 1 logs_generator.go:76] 48 PUT /api/v1/namespaces/ns/pods/mhq 326\nI0817 12:20:01.139475 1 logs_generator.go:76] 49 GET /api/v1/namespaces/ns/pods/bgj 510\nI0817 12:20:01.339497 1 logs_generator.go:76] 50 POST /api/v1/namespaces/default/pods/2zz 551\nI0817 12:20:01.539407 1 logs_generator.go:76] 51 GET /api/v1/namespaces/kube-system/pods/8wk 455\nI0817 12:20:01.739484 1 logs_generator.go:76] 52 POST /api/v1/namespaces/ns/pods/swx 568\nI0817 12:20:01.939454 1 logs_generator.go:76] 53 PUT /api/v1/namespaces/ns/pods/k6p 401\nI0817 12:20:02.139437 1 logs_generator.go:76] 54 GET /api/v1/namespaces/kube-system/pods/78b 363\nI0817 12:20:02.339400 1 logs_generator.go:76] 55 PUT /api/v1/namespaces/ns/pods/w5hc 447\nI0817 12:20:02.539433 1 logs_generator.go:76] 56 GET /api/v1/namespaces/default/pods/vbgh 446\nI0817 12:20:02.739426 1 logs_generator.go:76] 57 POST /api/v1/namespaces/kube-system/pods/vhm 456\nI0817 12:20:02.939469 1 logs_generator.go:76] 58 POST /api/v1/namespaces/kube-system/pods/fr8b 204\nI0817 12:20:03.139517 1 logs_generator.go:76] 59 POST /api/v1/namespaces/default/pods/zl7 335\nI0817 12:20:03.339439 1 logs_generator.go:76] 60 PUT /api/v1/namespaces/kube-system/pods/8bg7 451\nI0817 12:20:03.539469 1 logs_generator.go:76] 61 GET /api/v1/namespaces/ns/pods/7rmg 575\nI0817 12:20:03.739484 1 logs_generator.go:76] 62 PUT /api/v1/namespaces/default/pods/g8t 511\nI0817 12:20:03.939424 1 logs_generator.go:76] 63 PUT /api/v1/namespaces/kube-system/pods/nvbf 371\nI0817 12:20:04.139405 1 logs_generator.go:76] 
64 POST /api/v1/namespaces/default/pods/9ttq 308\nI0817 12:20:04.339452 1 logs_generator.go:76] 65 PUT /api/v1/namespaces/ns/pods/hmfd 543\nI0817 12:20:04.539424 1 logs_generator.go:76] 66 POST /api/v1/namespaces/ns/pods/bvl 392\nI0817 12:20:04.739397 1 logs_generator.go:76] 67 GET /api/v1/namespaces/ns/pods/wqt 576\nI0817 12:20:04.939444 1 logs_generator.go:76] 68 GET /api/v1/namespaces/kube-system/pods/xl6 586\nI0817 12:20:05.139442 1 logs_generator.go:76] 69 POST /api/v1/namespaces/kube-system/pods/dw26 545\nI0817 12:20:05.339440 1 logs_generator.go:76] 70 PUT /api/v1/namespaces/kube-system/pods/d2bm 319\n" [AfterEach] Kubectl logs /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1421 Aug 17 12:20:05.496: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-8115' Aug 17 12:20:10.285: INFO: stderr: "" Aug 17 12:20:10.285: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:20:10.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8115" for this suite. • [SLOW TEST:23.287 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1411 should be able to retrieve and filter logs [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":303,"completed":136,"skipped":2478,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:20:10.302: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion 
webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Aug 17 12:20:14.363: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Aug 17 12:20:16.567: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733263614, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733263614, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733263614, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733263614, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-85d57b96d6\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 17 12:20:18.575: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733263614, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733263614, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733263614, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733263614, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-85d57b96d6\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 17 12:20:21.649: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 17 12:20:21.657: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:20:23.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-8328" for this suite. 
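Outside the e2e framework, the readiness polling above (repeated DeploymentStatus dumps until ReadyReplicas catches up) is usually expressed directly with kubectl. A minimal sketch using the names from this run; treat them as placeholders, since the namespace is destroyed when the spec ends:

    # Block until the conversion webhook Deployment reports all replicas ready.
    kubectl rollout status deployment/sample-crd-conversion-webhook-deployment \
      --namespace=crd-webhook-8328 --timeout=5m

    # Count ready endpoint addresses behind the webhook Service, mirroring the
    # "service has paired with the endpoint" step (the test expects 1).
    kubectl get endpoints e2e-test-crd-conversion-webhook --namespace=crd-webhook-8328 \
      -o go-template='{{range .subsets}}{{len .addresses}}{{end}}'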
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:12.938 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":303,"completed":137,"skipped":2482,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:20:23.242: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating secret secrets-7137/secret-test-fadf067e-76ba-42da-a5dd-060500997ad6 STEP: Creating a pod to test consume secrets Aug 17 12:20:23.337: INFO: Waiting up to 5m0s for pod "pod-configmaps-c60d1f2b-780f-4da4-b72b-a5a2ca78baa0" in namespace "secrets-7137" to be "Succeeded or Failed" Aug 17 12:20:23.364: INFO: Pod "pod-configmaps-c60d1f2b-780f-4da4-b72b-a5a2ca78baa0": Phase="Pending", Reason="", readiness=false. Elapsed: 27.091275ms Aug 17 12:20:25.371: INFO: Pod "pod-configmaps-c60d1f2b-780f-4da4-b72b-a5a2ca78baa0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033582844s Aug 17 12:20:27.618: INFO: Pod "pod-configmaps-c60d1f2b-780f-4da4-b72b-a5a2ca78baa0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.280590275s STEP: Saw pod success Aug 17 12:20:27.618: INFO: Pod "pod-configmaps-c60d1f2b-780f-4da4-b72b-a5a2ca78baa0" satisfied condition "Succeeded or Failed" Aug 17 12:20:27.758: INFO: Trying to get logs from node latest-worker pod pod-configmaps-c60d1f2b-780f-4da4-b72b-a5a2ca78baa0 container env-test: STEP: delete the pod Aug 17 12:20:27.803: INFO: Waiting for pod pod-configmaps-c60d1f2b-780f-4da4-b72b-a5a2ca78baa0 to disappear Aug 17 12:20:27.826: INFO: Pod pod-configmaps-c60d1f2b-780f-4da4-b72b-a5a2ca78baa0 no longer exists [AfterEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:20:27.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7137" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":303,"completed":138,"skipped":2492,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:20:27.842: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to change the type from NodePort to ExternalName [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service nodeport-service with the type=NodePort in namespace services-9131 STEP: Creating active service to test reachability when its FQDN is referred to as externalName for another service STEP: creating service externalsvc in namespace services-9131 STEP: creating replication controller externalsvc in namespace services-9131 I0817 12:20:28.947203 10 runners.go:190] Created replication controller with name: externalsvc, namespace: services-9131, replica count: 2 I0817 12:20:31.998484 10 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0817 12:20:34.999285 10 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0817 12:20:38.000102 10 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0817 12:20:41.001183 10 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 
runningButNotReady STEP: changing the NodePort service to type=ExternalName Aug 17 12:20:41.226: INFO: Creating new exec pod Aug 17 12:20:47.461: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-9131 execpodf79dh -- /bin/sh -x -c nslookup nodeport-service.services-9131.svc.cluster.local' Aug 17 12:20:49.264: INFO: stderr: "I0817 12:20:49.179966 2307 log.go:181] (0x4000c8f550) (0x4000c86960) Create stream\nI0817 12:20:49.184262 2307 log.go:181] (0x4000c8f550) (0x4000c86960) Stream added, broadcasting: 1\nI0817 12:20:49.199608 2307 log.go:181] (0x4000c8f550) Reply frame received for 1\nI0817 12:20:49.200152 2307 log.go:181] (0x4000c8f550) (0x40006a4000) Create stream\nI0817 12:20:49.200238 2307 log.go:181] (0x4000c8f550) (0x40006a4000) Stream added, broadcasting: 3\nI0817 12:20:49.201414 2307 log.go:181] (0x4000c8f550) Reply frame received for 3\nI0817 12:20:49.201643 2307 log.go:181] (0x4000c8f550) (0x4000c86000) Create stream\nI0817 12:20:49.201700 2307 log.go:181] (0x4000c8f550) (0x4000c86000) Stream added, broadcasting: 5\nI0817 12:20:49.202654 2307 log.go:181] (0x4000c8f550) Reply frame received for 5\nI0817 12:20:49.240620 2307 log.go:181] (0x4000c8f550) Data frame received for 5\nI0817 12:20:49.241104 2307 log.go:181] (0x4000c86000) (5) Data frame handling\n+ nslookup nodeport-service.services-9131.svc.cluster.local\nI0817 12:20:49.242793 2307 log.go:181] (0x4000c86000) (5) Data frame sent\nI0817 12:20:49.247259 2307 log.go:181] (0x4000c8f550) Data frame received for 3\nI0817 12:20:49.247350 2307 log.go:181] (0x40006a4000) (3) Data frame handling\nI0817 12:20:49.247412 2307 log.go:181] (0x40006a4000) (3) Data frame sent\nI0817 12:20:49.248251 2307 log.go:181] (0x4000c8f550) Data frame received for 3\nI0817 12:20:49.248338 2307 log.go:181] (0x40006a4000) (3) Data frame handling\nI0817 12:20:49.248447 2307 log.go:181] (0x40006a4000) (3) Data frame sent\nI0817 12:20:49.248578 2307 log.go:181] (0x4000c8f550) Data frame received for 5\nI0817 12:20:49.248660 2307 log.go:181] (0x4000c86000) (5) Data frame handling\nI0817 12:20:49.249135 2307 log.go:181] (0x4000c8f550) Data frame received for 3\nI0817 12:20:49.249236 2307 log.go:181] (0x40006a4000) (3) Data frame handling\nI0817 12:20:49.250357 2307 log.go:181] (0x4000c8f550) Data frame received for 1\nI0817 12:20:49.250413 2307 log.go:181] (0x4000c86960) (1) Data frame handling\nI0817 12:20:49.250466 2307 log.go:181] (0x4000c86960) (1) Data frame sent\nI0817 12:20:49.251733 2307 log.go:181] (0x4000c8f550) (0x4000c86960) Stream removed, broadcasting: 1\nI0817 12:20:49.254182 2307 log.go:181] (0x4000c8f550) Go away received\nI0817 12:20:49.256359 2307 log.go:181] (0x4000c8f550) (0x4000c86960) Stream removed, broadcasting: 1\nI0817 12:20:49.256974 2307 log.go:181] (0x4000c8f550) (0x40006a4000) Stream removed, broadcasting: 3\nI0817 12:20:49.257136 2307 log.go:181] (0x4000c8f550) (0x4000c86000) Stream removed, broadcasting: 5\n" Aug 17 12:20:49.265: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-9131.svc.cluster.local\tcanonical name = externalsvc.services-9131.svc.cluster.local.\nName:\texternalsvc.services-9131.svc.cluster.local\nAddress: 10.106.229.103\n\n" STEP: deleting ReplicationController externalsvc in namespace services-9131, will wait for the garbage collector to delete the pods Aug 17 12:20:49.776: INFO: Deleting ReplicationController externalsvc took: 452.829514ms Aug 17 12:20:50.176: INFO: Terminating 
ReplicationController externalsvc pods took: 400.845241ms Aug 17 12:21:01.037: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:21:01.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9131" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:34.438 seconds] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":303,"completed":139,"skipped":2495,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:21:02.283: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in container's args Aug 17 12:21:02.767: INFO: Waiting up to 5m0s for pod "var-expansion-50836ff4-2580-4455-933e-b38818e8f2f2" in namespace "var-expansion-684" to be "Succeeded or Failed" Aug 17 12:21:02.834: INFO: Pod "var-expansion-50836ff4-2580-4455-933e-b38818e8f2f2": Phase="Pending", Reason="", readiness=false. Elapsed: 66.675999ms Aug 17 12:21:04.913: INFO: Pod "var-expansion-50836ff4-2580-4455-933e-b38818e8f2f2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.145703651s Aug 17 12:21:06.919: INFO: Pod "var-expansion-50836ff4-2580-4455-933e-b38818e8f2f2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.152286345s Aug 17 12:21:09.370: INFO: Pod "var-expansion-50836ff4-2580-4455-933e-b38818e8f2f2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.602577874s Aug 17 12:21:11.377: INFO: Pod "var-expansion-50836ff4-2580-4455-933e-b38818e8f2f2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.609607101s STEP: Saw pod success Aug 17 12:21:11.377: INFO: Pod "var-expansion-50836ff4-2580-4455-933e-b38818e8f2f2" satisfied condition "Succeeded or Failed" Aug 17 12:21:11.382: INFO: Trying to get logs from node latest-worker pod var-expansion-50836ff4-2580-4455-933e-b38818e8f2f2 container dapi-container: STEP: delete the pod Aug 17 12:21:11.819: INFO: Waiting for pod var-expansion-50836ff4-2580-4455-933e-b38818e8f2f2 to disappear Aug 17 12:21:12.057: INFO: Pod var-expansion-50836ff4-2580-4455-933e-b38818e8f2f2 no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:21:12.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-684" for this suite. • [SLOW TEST:9.838 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":303,"completed":140,"skipped":2502,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:21:12.124: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Aug 17 12:21:12.747: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Aug 17 12:21:12.790: INFO: Waiting for terminating namespaces to be deleted... 
Aug 17 12:21:12.794: INFO: Logging pods the apiserver thinks are on node latest-worker before test Aug 17 12:21:12.801: INFO: kindnet-gmpqb from kube-system started at 2020-08-15 09:42:30 +0000 UTC (1 container status recorded) Aug 17 12:21:12.801: INFO: Container kindnet-cni ready: true, restart count 0 Aug 17 12:21:12.801: INFO: kube-proxy-82wrf from kube-system started at 2020-08-15 09:42:30 +0000 UTC (1 container status recorded) Aug 17 12:21:12.801: INFO: Container kube-proxy ready: true, restart count 0 Aug 17 12:21:12.801: INFO: Logging pods the apiserver thinks are on node latest-worker2 before test Aug 17 12:21:12.809: INFO: kindnet-grzzh from kube-system started at 2020-08-15 09:42:30 +0000 UTC (1 container status recorded) Aug 17 12:21:12.809: INFO: Container kindnet-cni ready: true, restart count 0 Aug 17 12:21:12.809: INFO: kube-proxy-fjk8r from kube-system started at 2020-08-15 09:42:29 +0000 UTC (1 container status recorded) Aug 17 12:21:12.809: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-9091a0ac-b7f2-4be5-9245-6c079d0f3184 90 STEP: Trying to create a pod (pod1) with hostport 54321 and hostIP 127.0.0.1 and expect it to be scheduled STEP: Trying to create another pod (pod2) with hostport 54321 but hostIP 127.0.0.2 on the node where pod1 resides and expect it to be scheduled STEP: Trying to create a third pod (pod3) with hostport 54321 and hostIP 127.0.0.2 but using the UDP protocol on the node where pod2 resides STEP: removing the label kubernetes.io/e2e-9091a0ac-b7f2-4be5-9245-6c079d0f3184 off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-9091a0ac-b7f2-4be5-9245-6c079d0f3184 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:21:42.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1761" for this suite. 
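The predicate under test keys hostPort usage on the full (hostIP, hostPort, protocol) tuple, which is why pod2 (different hostIP) and pod3 (different protocol) schedule alongside pod1 without conflict. A minimal sketch of the shape involved, with a placeholder pod name and image rather than the generated ones above:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: hostport-demo
    spec:
      containers:
      - name: main
        image: k8s.gcr.io/pause:3.2
        ports:
        - containerPort: 80
          hostPort: 54321     # conflicts only with an identical (hostIP, port, protocol) claim
          hostIP: 127.0.0.1   # a second pod with 127.0.0.2, or with protocol UDP, still fits
          protocol: TCP
    EOF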
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:30.686 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":303,"completed":141,"skipped":2514,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicaSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:21:42.812: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Aug 17 12:21:49.374: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:21:49.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-1321" for this suite. 
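Adoption and release in this spec are driven entirely by label selection: a bare pod whose labels match the ReplicaSet's selector gains an ownerReference, and rewriting the label releases it again. A minimal sketch of the release step, assuming the pod name from this run; the replacement label value is arbitrary:

    # Change the selector label so the ReplicaSet no longer matches the pod;
    # the controller drops its ownerReference and creates a replacement pod.
    kubectl label pod pod-adoption-release name=released --overwrite

    # An empty result here confirms the pod has been released.
    kubectl get pod pod-adoption-release \
      -o go-template='{{range .metadata.ownerReferences}}{{.kind}}/{{.name}} {{end}}'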
• [SLOW TEST:7.313 seconds] [sig-apps] ReplicaSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":303,"completed":142,"skipped":2528,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:21:50.126: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 17 12:23:51.762: INFO: Deleting pod "var-expansion-4b2e199d-333c-4530-8f1d-7cf4f5d1c97f" in namespace "var-expansion-2725" Aug 17 12:23:51.769: INFO: Wait up to 5m0s for pod "var-expansion-4b2e199d-333c-4530-8f1d-7cf4f5d1c97f" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:23:56.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-2725" for this suite. 
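The failure asserted above concerns subPathExpr, which expands only $(VAR_NAME) references to the container's own environment; the spec verifies that a value written with shell-style backticks makes the pod fail rather than being executed as a command. A minimal sketch of the valid form, with hypothetical names and image:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: subpath-demo
    spec:
      restartPolicy: Never
      containers:
      - name: main
        image: busybox
        command: ["sh", "-c", "ls /data"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        volumeMounts:
        - name: work
          mountPath: /data
          subPathExpr: $(POD_NAME)   # expanded from the env var; backticks would not be
      volumes:
      - name: work
        emptyDir: {}
    EOF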
• [SLOW TEST:126.337 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]","total":303,"completed":143,"skipped":2528,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:23:56.464: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 17 12:24:00.283: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 17 12:24:02.306: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733263840, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733263840, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733263840, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733263840, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 17 12:24:04.643: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733263840, loc:(*time.Location)(0x6e4f160)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733263840, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733263840, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733263840, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 17 12:24:06.313: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733263840, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733263840, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733263840, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733263840, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 17 12:24:09.433: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:24:09.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3331" for this suite. STEP: Destroying namespace "webhook-3331-markers" for this suite. 
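The "Registering the mutating configmap webhook" step above amounts to creating an admissionregistration.k8s.io/v1 object that routes ConfigMap CREATE requests through the deployed service. A minimal sketch with the caBundle elided; the object name, webhook name, and path are illustrative stand-ins for what the framework generates:

    kubectl apply -f - <<'EOF'
    apiVersion: admissionregistration.k8s.io/v1
    kind: MutatingWebhookConfiguration
    metadata:
      name: mutate-configmap-demo
    webhooks:
    - name: mutate-configmap.example.com
      admissionReviewVersions: ["v1"]
      sideEffects: None
      failurePolicy: Fail
      rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["configmaps"]
      clientConfig:
        service:
          namespace: webhook-3331
          name: e2e-test-webhook
          path: /mutating-configmaps
        # caBundle: <base64-encoded CA that signs the webhook's serving certificate>
    EOF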
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:13.407 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":303,"completed":144,"skipped":2536,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} S ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:24:09.872: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:24:21.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5437" for this suite. • [SLOW TEST:11.674 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance]","total":303,"completed":145,"skipped":2537,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:24:21.548: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartAlways pod [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod Aug 17 12:24:21.639: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:24:34.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-54" for this suite. 
• [SLOW TEST:13.404 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should invoke init containers on a RestartAlways pod [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":303,"completed":146,"skipped":2567,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:24:34.953: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Aug 17 12:24:35.033: INFO: Waiting up to 5m0s for pod "downward-api-50f73da5-a2b7-4d45-a440-c3015f74747a" in namespace "downward-api-6803" to be "Succeeded or Failed" Aug 17 12:24:35.061: INFO: Pod "downward-api-50f73da5-a2b7-4d45-a440-c3015f74747a": Phase="Pending", Reason="", readiness=false. Elapsed: 28.358602ms Aug 17 12:24:37.074: INFO: Pod "downward-api-50f73da5-a2b7-4d45-a440-c3015f74747a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040879555s Aug 17 12:24:39.806: INFO: Pod "downward-api-50f73da5-a2b7-4d45-a440-c3015f74747a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.773156549s Aug 17 12:24:42.100: INFO: Pod "downward-api-50f73da5-a2b7-4d45-a440-c3015f74747a": Phase="Running", Reason="", readiness=true. Elapsed: 7.067113397s Aug 17 12:24:44.113: INFO: Pod "downward-api-50f73da5-a2b7-4d45-a440-c3015f74747a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 9.080584014s STEP: Saw pod success Aug 17 12:24:44.114: INFO: Pod "downward-api-50f73da5-a2b7-4d45-a440-c3015f74747a" satisfied condition "Succeeded or Failed" Aug 17 12:24:44.319: INFO: Trying to get logs from node latest-worker pod downward-api-50f73da5-a2b7-4d45-a440-c3015f74747a container dapi-container: STEP: delete the pod Aug 17 12:24:44.697: INFO: Waiting for pod downward-api-50f73da5-a2b7-4d45-a440-c3015f74747a to disappear Aug 17 12:24:44.739: INFO: Pod downward-api-50f73da5-a2b7-4d45-a440-c3015f74747a no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:24:44.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6803" for this suite. • [SLOW TEST:10.075 seconds] [sig-node] Downward API /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":303,"completed":147,"skipped":2578,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:24:45.033: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on tmpfs Aug 17 12:24:46.406: INFO: Waiting up to 5m0s for pod "pod-32ed44b1-775d-49fa-a11d-c292192ab249" in namespace "emptydir-5465" to be "Succeeded or Failed" Aug 17 12:24:46.553: INFO: Pod "pod-32ed44b1-775d-49fa-a11d-c292192ab249": Phase="Pending", Reason="", readiness=false. Elapsed: 146.664557ms Aug 17 12:24:48.559: INFO: Pod "pod-32ed44b1-775d-49fa-a11d-c292192ab249": Phase="Pending", Reason="", readiness=false. Elapsed: 2.153371861s Aug 17 12:24:50.710: INFO: Pod "pod-32ed44b1-775d-49fa-a11d-c292192ab249": Phase="Pending", Reason="", readiness=false. Elapsed: 4.303961523s Aug 17 12:24:52.827: INFO: Pod "pod-32ed44b1-775d-49fa-a11d-c292192ab249": Phase="Running", Reason="", readiness=true. 
Elapsed: 6.421027172s Aug 17 12:24:54.834: INFO: Pod "pod-32ed44b1-775d-49fa-a11d-c292192ab249": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.427769161s STEP: Saw pod success Aug 17 12:24:54.834: INFO: Pod "pod-32ed44b1-775d-49fa-a11d-c292192ab249" satisfied condition "Succeeded or Failed" Aug 17 12:24:54.990: INFO: Trying to get logs from node latest-worker pod pod-32ed44b1-775d-49fa-a11d-c292192ab249 container test-container: STEP: delete the pod Aug 17 12:24:55.366: INFO: Waiting for pod pod-32ed44b1-775d-49fa-a11d-c292192ab249 to disappear Aug 17 12:24:55.515: INFO: Pod pod-32ed44b1-775d-49fa-a11d-c292192ab249 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:24:55.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5465" for this suite. • [SLOW TEST:10.498 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":148,"skipped":2611,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:24:55.534: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name s-test-opt-del-e9b68a82-d7e4-47b1-96ad-3ebe27c56290 STEP: Creating secret with name s-test-opt-upd-abf1edea-4c22-4d03-9f6c-2ca8bcd22a16 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-e9b68a82-d7e4-47b1-96ad-3ebe27c56290 STEP: Updating secret s-test-opt-upd-abf1edea-4c22-4d03-9f6c-2ca8bcd22a16 STEP: Creating secret with name s-test-opt-create-4a10b005-48ae-4dae-b900-0c25b70f601e STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:25:12.644: INFO: Waiting up to 3m0s 
for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7612" for this suite. • [SLOW TEST:17.121 seconds] [sig-storage] Projected secret /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":149,"skipped":2638,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:25:12.657: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Aug 17 12:25:29.615: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Aug 17 12:25:29.620: INFO: Pod pod-with-poststart-http-hook still exists Aug 17 12:25:31.621: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Aug 17 12:25:31.629: INFO: Pod pod-with-poststart-http-hook still exists Aug 17 12:25:33.621: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Aug 17 12:25:33.629: INFO: Pod pod-with-poststart-http-hook still exists Aug 17 12:25:35.621: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Aug 17 12:25:35.628: INFO: Pod pod-with-poststart-http-hook still exists Aug 17 12:25:37.621: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Aug 17 12:25:37.628: INFO: Pod pod-with-poststart-http-hook still exists Aug 17 12:25:39.621: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Aug 17 12:25:39.627: INFO: Pod pod-with-poststart-http-hook still exists Aug 17 12:25:41.621: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Aug 17 12:25:41.627: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:25:41.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-5768" for this suite. 
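
The pod this spec creates carries a postStart httpGet hook, which the kubelet fires against the handler pod started in BeforeEach as soon as the container starts. A client-go sketch of the shape of that pod; the handler IP, port, and image are illustrative assumptions:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-http-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pod-with-poststart-http-hook",
				Image: "k8s.gcr.io/pause:3.2", // illustrative image
				// The kubelet issues this GET right after the container starts;
				// if it fails, the container is killed per its restart policy.
				Lifecycle: &corev1.Lifecycle{
					PostStart: &corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/echo?msg=poststart",
							Host: "10.244.0.10", // illustrative: IP of the hook-handler pod
							Port: intstr.FromInt(8080),
						},
					},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
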
• [SLOW TEST:28.990 seconds] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":303,"completed":150,"skipped":2657,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:25:41.648: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Aug 17 12:25:41.738: INFO: Waiting up to 5m0s for pod "downward-api-4c6e5cea-3f25-45be-a7d7-c14cfbc66038" in namespace "downward-api-8293" to be "Succeeded or Failed" Aug 17 12:25:41.751: INFO: Pod "downward-api-4c6e5cea-3f25-45be-a7d7-c14cfbc66038": Phase="Pending", Reason="", readiness=false. Elapsed: 13.545418ms Aug 17 12:25:43.759: INFO: Pod "downward-api-4c6e5cea-3f25-45be-a7d7-c14cfbc66038": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02131427s Aug 17 12:25:45.767: INFO: Pod "downward-api-4c6e5cea-3f25-45be-a7d7-c14cfbc66038": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.029272513s STEP: Saw pod success Aug 17 12:25:45.767: INFO: Pod "downward-api-4c6e5cea-3f25-45be-a7d7-c14cfbc66038" satisfied condition "Succeeded or Failed" Aug 17 12:25:45.774: INFO: Trying to get logs from node latest-worker2 pod downward-api-4c6e5cea-3f25-45be-a7d7-c14cfbc66038 container dapi-container: STEP: delete the pod Aug 17 12:25:45.813: INFO: Waiting for pod downward-api-4c6e5cea-3f25-45be-a7d7-c14cfbc66038 to disappear Aug 17 12:25:45.824: INFO: Pod downward-api-4c6e5cea-3f25-45be-a7d7-c14cfbc66038 no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:25:45.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8293" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":303,"completed":151,"skipped":2662,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:25:45.841: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0817 12:25:48.117284 10 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Aug 17 12:26:50.177: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:26:50.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4210" for this suite. 
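
The delete that the garbage collector must not cascade is a Deployment deletion with deleteOptions.PropagationPolicy set to Orphan: the owner goes away, but ownerReferences are stripped from its dependents rather than deleting them, so the ReplicaSet survives. A minimal client-go sketch, using this run's namespace but an illustrative deployment name:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	orphan := metav1.DeletePropagationOrphan
	// Orphan: delete the Deployment itself, but leave its ReplicaSet(s)
	// behind with their ownerReferences removed.
	err = cs.AppsV1().Deployments("gc-4210").Delete(
		context.TODO(),
		"simpletest-deployment", // illustrative deployment name
		metav1.DeleteOptions{PropagationPolicy: &orphan},
	)
	if err != nil {
		panic(err)
	}
}
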
• [SLOW TEST:64.351 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":303,"completed":152,"skipped":2689,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:26:50.194: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should support --unix-socket=/path [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Starting the proxy Aug 17 12:26:50.294: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix915549293/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:26:51.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6187" for this suite. 
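
Once kubectl proxy is listening on a unix socket, any HTTP client that can dial that socket can retrieve /api/ through it, which is what the spec's "retrieving proxy /api/ output" step does. A self-contained Go sketch, assuming a proxy started with something like kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock (the socket path is illustrative):

package main

import (
	"context"
	"fmt"
	"io/ioutil"
	"net"
	"net/http"
)

func main() {
	sock := "/tmp/kubectl-proxy.sock" // illustrative socket path
	client := &http.Client{
		Transport: &http.Transport{
			// Route every request through the proxy's unix socket instead of TCP.
			DialContext: func(ctx context.Context, network, addr string) (net.Conn, error) {
				return net.Dial("unix", sock)
			},
		},
	}
	resp, err := client.Get("http://localhost/api/") // host is ignored; the dial goes to the socket
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body)) // the APIVersions document served by the apiserver
}
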
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":303,"completed":153,"skipped":2695,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:26:51.497: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-2c6fb586-3fdd-4ff9-95dc-e9ac06ed0e54 STEP: Creating a pod to test consume configMaps Aug 17 12:26:51.621: INFO: Waiting up to 5m0s for pod "pod-configmaps-142eff6c-e091-4d37-86d2-428c21fcfbcd" in namespace "configmap-6033" to be "Succeeded or Failed" Aug 17 12:26:51.646: INFO: Pod "pod-configmaps-142eff6c-e091-4d37-86d2-428c21fcfbcd": Phase="Pending", Reason="", readiness=false. Elapsed: 24.72559ms Aug 17 12:26:53.652: INFO: Pod "pod-configmaps-142eff6c-e091-4d37-86d2-428c21fcfbcd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030395523s Aug 17 12:26:55.952: INFO: Pod "pod-configmaps-142eff6c-e091-4d37-86d2-428c21fcfbcd": Phase="Running", Reason="", readiness=true. Elapsed: 4.330564816s Aug 17 12:26:58.694: INFO: Pod "pod-configmaps-142eff6c-e091-4d37-86d2-428c21fcfbcd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.072831101s STEP: Saw pod success Aug 17 12:26:58.694: INFO: Pod "pod-configmaps-142eff6c-e091-4d37-86d2-428c21fcfbcd" satisfied condition "Succeeded or Failed" Aug 17 12:26:58.981: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-142eff6c-e091-4d37-86d2-428c21fcfbcd container configmap-volume-test: STEP: delete the pod Aug 17 12:26:59.908: INFO: Waiting for pod pod-configmaps-142eff6c-e091-4d37-86d2-428c21fcfbcd to disappear Aug 17 12:27:00.003: INFO: Pod pod-configmaps-142eff6c-e091-4d37-86d2-428c21fcfbcd no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:27:00.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6033" for this suite. 
• [SLOW TEST:9.125 seconds] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":303,"completed":154,"skipped":2725,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:27:00.622: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 17 12:27:01.989: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-b7e345fc-48e6-4b58-a0c1-a679bfcfc44f" in namespace "security-context-test-7297" to be "Succeeded or Failed" Aug 17 12:27:02.605: INFO: Pod "busybox-readonly-false-b7e345fc-48e6-4b58-a0c1-a679bfcfc44f": Phase="Pending", Reason="", readiness=false. Elapsed: 615.572461ms Aug 17 12:27:04.877: INFO: Pod "busybox-readonly-false-b7e345fc-48e6-4b58-a0c1-a679bfcfc44f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.887529812s Aug 17 12:27:06.939: INFO: Pod "busybox-readonly-false-b7e345fc-48e6-4b58-a0c1-a679bfcfc44f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.94991985s Aug 17 12:27:09.123: INFO: Pod "busybox-readonly-false-b7e345fc-48e6-4b58-a0c1-a679bfcfc44f": Phase="Running", Reason="", readiness=true. Elapsed: 7.134277955s Aug 17 12:27:11.130: INFO: Pod "busybox-readonly-false-b7e345fc-48e6-4b58-a0c1-a679bfcfc44f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 9.141325833s Aug 17 12:27:11.131: INFO: Pod "busybox-readonly-false-b7e345fc-48e6-4b58-a0c1-a679bfcfc44f" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:27:11.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-7297" for this suite. • [SLOW TEST:10.524 seconds] [k8s.io] Security Context /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 When creating a pod with readOnlyRootFilesystem /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:166 should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":303,"completed":155,"skipped":2734,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:27:11.150: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in container's command Aug 17 12:27:11.321: INFO: Waiting up to 5m0s for pod "var-expansion-665983bc-52e1-4afd-9750-4bc545eec023" in namespace "var-expansion-74" to be "Succeeded or Failed" Aug 17 12:27:11.404: INFO: Pod "var-expansion-665983bc-52e1-4afd-9750-4bc545eec023": Phase="Pending", Reason="", readiness=false. Elapsed: 83.314603ms Aug 17 12:27:13.413: INFO: Pod "var-expansion-665983bc-52e1-4afd-9750-4bc545eec023": Phase="Pending", Reason="", readiness=false. Elapsed: 2.091979116s Aug 17 12:27:15.419: INFO: Pod "var-expansion-665983bc-52e1-4afd-9750-4bc545eec023": Phase="Pending", Reason="", readiness=false. Elapsed: 4.098277256s Aug 17 12:27:17.799: INFO: Pod "var-expansion-665983bc-52e1-4afd-9750-4bc545eec023": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.478311709s Aug 17 12:27:19.933: INFO: Pod "var-expansion-665983bc-52e1-4afd-9750-4bc545eec023": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.611763338s STEP: Saw pod success Aug 17 12:27:19.933: INFO: Pod "var-expansion-665983bc-52e1-4afd-9750-4bc545eec023" satisfied condition "Succeeded or Failed" Aug 17 12:27:19.939: INFO: Trying to get logs from node latest-worker2 pod var-expansion-665983bc-52e1-4afd-9750-4bc545eec023 container dapi-container: STEP: delete the pod Aug 17 12:27:20.641: INFO: Waiting for pod var-expansion-665983bc-52e1-4afd-9750-4bc545eec023 to disappear Aug 17 12:27:20.859: INFO: Pod var-expansion-665983bc-52e1-4afd-9750-4bc545eec023 no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:27:20.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-74" for this suite. • [SLOW TEST:9.724 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":303,"completed":156,"skipped":2765,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:27:20.875: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 17 12:27:21.528: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f3754dc7-cadc-41c7-b8fb-66ce5ccb1eb3" in namespace "downward-api-5184" to be "Succeeded or Failed" Aug 17 12:27:21.975: INFO: Pod "downwardapi-volume-f3754dc7-cadc-41c7-b8fb-66ce5ccb1eb3": Phase="Pending", Reason="", 
readiness=false. Elapsed: 446.207392ms Aug 17 12:27:24.053: INFO: Pod "downwardapi-volume-f3754dc7-cadc-41c7-b8fb-66ce5ccb1eb3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.524204344s Aug 17 12:27:26.059: INFO: Pod "downwardapi-volume-f3754dc7-cadc-41c7-b8fb-66ce5ccb1eb3": Phase="Running", Reason="", readiness=true. Elapsed: 4.530477285s Aug 17 12:27:28.067: INFO: Pod "downwardapi-volume-f3754dc7-cadc-41c7-b8fb-66ce5ccb1eb3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.538358012s STEP: Saw pod success Aug 17 12:27:28.067: INFO: Pod "downwardapi-volume-f3754dc7-cadc-41c7-b8fb-66ce5ccb1eb3" satisfied condition "Succeeded or Failed" Aug 17 12:27:28.073: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-f3754dc7-cadc-41c7-b8fb-66ce5ccb1eb3 container client-container: STEP: delete the pod Aug 17 12:27:28.131: INFO: Waiting for pod downwardapi-volume-f3754dc7-cadc-41c7-b8fb-66ce5ccb1eb3 to disappear Aug 17 12:27:28.140: INFO: Pod downwardapi-volume-f3754dc7-cadc-41c7-b8fb-66ce5ccb1eb3 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:27:28.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5184" for this suite. • [SLOW TEST:7.296 seconds] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":303,"completed":157,"skipped":2774,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} S ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:27:28.172: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. 
[Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:27:44.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8713" for this suite. • [SLOW TEST:16.379 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":303,"completed":158,"skipped":2775,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:27:44.557: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Aug 17 12:27:52.765: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 17 12:27:52.796: INFO: Pod pod-with-poststart-exec-hook still exists Aug 17 12:27:54.797: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 17 12:27:54.819: INFO: Pod pod-with-poststart-exec-hook still exists Aug 17 12:27:56.796: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 17 12:27:56.813: INFO: Pod pod-with-poststart-exec-hook still exists Aug 17 12:27:58.797: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 17 12:27:58.804: INFO: Pod pod-with-poststart-exec-hook still exists Aug 17 12:28:00.797: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 17 12:28:00.803: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:28:00.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3011" for this suite. • [SLOW TEST:16.261 seconds] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":303,"completed":159,"skipped":2832,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:28:00.820: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] 
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override arguments Aug 17 12:28:00.942: INFO: Waiting up to 5m0s for pod "client-containers-e88907b0-9607-4119-81bb-7d028bc0773d" in namespace "containers-673" to be "Succeeded or Failed" Aug 17 12:28:00.951: INFO: Pod "client-containers-e88907b0-9607-4119-81bb-7d028bc0773d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.371235ms Aug 17 12:28:03.101: INFO: Pod "client-containers-e88907b0-9607-4119-81bb-7d028bc0773d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.158807388s Aug 17 12:28:05.112: INFO: Pod "client-containers-e88907b0-9607-4119-81bb-7d028bc0773d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.169745491s STEP: Saw pod success Aug 17 12:28:05.112: INFO: Pod "client-containers-e88907b0-9607-4119-81bb-7d028bc0773d" satisfied condition "Succeeded or Failed" Aug 17 12:28:05.117: INFO: Trying to get logs from node latest-worker pod client-containers-e88907b0-9607-4119-81bb-7d028bc0773d container test-container: STEP: delete the pod Aug 17 12:28:05.159: INFO: Waiting for pod client-containers-e88907b0-9607-4119-81bb-7d028bc0773d to disappear Aug 17 12:28:05.175: INFO: Pod client-containers-e88907b0-9607-4119-81bb-7d028bc0773d no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:28:05.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-673" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":303,"completed":160,"skipped":2839,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:28:05.192: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on node default medium Aug 17 12:28:05.433: INFO: Waiting up to 5m0s for pod "pod-b6394db2-9c97-4909-bc71-9e28180233b9" in namespace "emptydir-577" to be "Succeeded or Failed" Aug 17 12:28:05.454: INFO: Pod "pod-b6394db2-9c97-4909-bc71-9e28180233b9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 20.975769ms Aug 17 12:28:07.471: INFO: Pod "pod-b6394db2-9c97-4909-bc71-9e28180233b9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037566369s Aug 17 12:28:09.478: INFO: Pod "pod-b6394db2-9c97-4909-bc71-9e28180233b9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044509015s STEP: Saw pod success Aug 17 12:28:09.478: INFO: Pod "pod-b6394db2-9c97-4909-bc71-9e28180233b9" satisfied condition "Succeeded or Failed" Aug 17 12:28:09.501: INFO: Trying to get logs from node latest-worker2 pod pod-b6394db2-9c97-4909-bc71-9e28180233b9 container test-container: STEP: delete the pod Aug 17 12:28:09.538: INFO: Waiting for pod pod-b6394db2-9c97-4909-bc71-9e28180233b9 to disappear Aug 17 12:28:09.553: INFO: Pod pod-b6394db2-9c97-4909-bc71-9e28180233b9 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:28:09.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-577" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":161,"skipped":2842,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSS ------------------------------ [sig-api-machinery] Events should delete a collection of events [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Events /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:28:09.571: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a collection of events [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create set of events Aug 17 12:28:09.711: INFO: created test-event-1 Aug 17 12:28:09.772: INFO: created test-event-2 Aug 17 12:28:09.830: INFO: created test-event-3 STEP: get a list of Events with a label in the current namespace STEP: delete collection of events Aug 17 12:28:09.843: INFO: requesting DeleteCollection of events STEP: check that the list of events matches the requested quantity Aug 17 12:28:10.037: INFO: requesting list of events to confirm quantity [AfterEach] [sig-api-machinery] Events /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:28:10.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-7111" for this suite. 
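
The core of this spec is a DeleteCollection call scoped by a label selector, followed by a List with the same selector to confirm the events are gone. A client-go sketch using this run's namespace; the label selector is an illustrative assumption:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	events := cs.CoreV1().Events("events-7111")
	sel := metav1.ListOptions{LabelSelector: "testevent-set=true"} // assumed label

	// Delete every event matching the label in a single API call ...
	if err := events.DeleteCollection(context.TODO(), metav1.DeleteOptions{}, sel); err != nil {
		panic(err)
	}
	// ... then list again to confirm the requested quantity (zero) remains.
	list, err := events.List(context.TODO(), sel)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d matching events remain\n", len(list.Items))
}
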
•{"msg":"PASSED [sig-api-machinery] Events should delete a collection of events [Conformance]","total":303,"completed":162,"skipped":2849,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:28:10.055: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Kubectl run pod /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1545 [It] should create a pod from an image when restart is Never [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine Aug 17 12:28:10.146: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-3641' Aug 17 12:28:11.728: INFO: stderr: "" Aug 17 12:28:11.729: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1550 Aug 17 12:28:11.736: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-3641' Aug 17 12:28:20.101: INFO: stderr: "" Aug 17 12:28:20.101: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:28:20.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3641" for this suite. 
• [SLOW TEST:10.062 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1541 should create a pod from an image when restart is Never [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":303,"completed":163,"skipped":2856,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected combined /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:28:20.124: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-projected-all-test-volume-944e5b2c-4440-46de-9fb7-ac2c22810981 STEP: Creating secret with name secret-projected-all-test-volume-8b3369fd-dd3d-44ed-a14c-1d1e628cc853 STEP: Creating a pod to test Check all projections for projected volume plugin Aug 17 12:28:20.227: INFO: Waiting up to 5m0s for pod "projected-volume-00851e5d-8862-41bf-98d9-5e26f7338af7" in namespace "projected-5364" to be "Succeeded or Failed" Aug 17 12:28:20.243: INFO: Pod "projected-volume-00851e5d-8862-41bf-98d9-5e26f7338af7": Phase="Pending", Reason="", readiness=false. Elapsed: 15.089071ms Aug 17 12:28:22.368: INFO: Pod "projected-volume-00851e5d-8862-41bf-98d9-5e26f7338af7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.140132026s Aug 17 12:28:24.418: INFO: Pod "projected-volume-00851e5d-8862-41bf-98d9-5e26f7338af7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.190655293s STEP: Saw pod success Aug 17 12:28:24.419: INFO: Pod "projected-volume-00851e5d-8862-41bf-98d9-5e26f7338af7" satisfied condition "Succeeded or Failed" Aug 17 12:28:24.425: INFO: Trying to get logs from node latest-worker pod projected-volume-00851e5d-8862-41bf-98d9-5e26f7338af7 container projected-all-volume-test: STEP: delete the pod Aug 17 12:28:24.557: INFO: Waiting for pod projected-volume-00851e5d-8862-41bf-98d9-5e26f7338af7 to disappear Aug 17 12:28:24.565: INFO: Pod projected-volume-00851e5d-8862-41bf-98d9-5e26f7338af7 no longer exists [AfterEach] [sig-storage] Projected combined /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:28:24.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5364" for this suite. •{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":303,"completed":164,"skipped":2886,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:28:24.584: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 17 12:28:24.699: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3a0707e2-a9db-4b6b-9c44-2c521bcab6f7" in namespace "downward-api-529" to be "Succeeded or Failed" Aug 17 12:28:24.718: INFO: Pod "downwardapi-volume-3a0707e2-a9db-4b6b-9c44-2c521bcab6f7": Phase="Pending", Reason="", readiness=false. Elapsed: 18.785338ms Aug 17 12:28:26.725: INFO: Pod "downwardapi-volume-3a0707e2-a9db-4b6b-9c44-2c521bcab6f7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026452091s Aug 17 12:28:28.734: INFO: Pod "downwardapi-volume-3a0707e2-a9db-4b6b-9c44-2c521bcab6f7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.034706916s STEP: Saw pod success Aug 17 12:28:28.734: INFO: Pod "downwardapi-volume-3a0707e2-a9db-4b6b-9c44-2c521bcab6f7" satisfied condition "Succeeded or Failed" Aug 17 12:28:28.740: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-3a0707e2-a9db-4b6b-9c44-2c521bcab6f7 container client-container: STEP: delete the pod Aug 17 12:28:28.779: INFO: Waiting for pod downwardapi-volume-3a0707e2-a9db-4b6b-9c44-2c521bcab6f7 to disappear Aug 17 12:28:28.818: INFO: Pod downwardapi-volume-3a0707e2-a9db-4b6b-9c44-2c521bcab6f7 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:28:28.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-529" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":165,"skipped":2920,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:28:28.854: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:28:29.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1250" for this suite. 
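The [It] body above logs no steps of its own: the fixture schedules a busybox pod whose command always fails, and the assertion is simply that such a pod can still be deleted cleanly. A minimal sketch of a manual equivalent -- the pod name "always-fails" is hypothetical:

    kubectl run always-fails --restart=Never --image=busybox \
        --namespace=kubelet-test-1250 --command -- /bin/false
    kubectl delete pod always-fails --namespace=kubelet-test-1250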
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":303,"completed":166,"skipped":2920,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:28:29.054: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name secret-emptykey-test-7791acff-c7c1-4b50-8d51-65dc4e55ad26 [AfterEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:28:29.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6914" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":303,"completed":167,"skipped":2925,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:28:29.157: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Aug 17 12:28:33.418: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:28:33.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2493" for this suite. 
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":303,"completed":168,"skipped":2925,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:28:33.579: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod Aug 17 12:28:38.230: INFO: Successfully updated pod "labelsupdate46e4cac5-564e-40ff-98e7-f90e874f0ba7" [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:28:40.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6248" for this suite. 
• [SLOW TEST:6.704 seconds] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":303,"completed":169,"skipped":2953,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:28:40.285: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: validating cluster-info Aug 17 12:28:40.399: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config cluster-info' Aug 17 12:28:41.764: INFO: stderr: "" Aug 17 12:28:41.764: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:45453\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:45453/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:28:41.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2980" for this suite. 
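The \x1b[0;32m-style sequences in the recorded stdout are ANSI color codes emitted by kubectl, not corruption. Decoded, the validated output is simply:

    kubectl cluster-info
    # Kubernetes master is running at https://172.30.12.66:45453
    # KubeDNS is running at https://172.30.12.66:45453/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy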
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":303,"completed":170,"skipped":2980,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:28:41.778: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-map-f2fe410e-0a13-4bf9-8e84-107f318436b2 STEP: Creating a pod to test consume secrets Aug 17 12:28:42.067: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7abf9d00-d849-4f4c-a9c1-b300635aa225" in namespace "projected-1426" to be "Succeeded or Failed" Aug 17 12:28:42.114: INFO: Pod "pod-projected-secrets-7abf9d00-d849-4f4c-a9c1-b300635aa225": Phase="Pending", Reason="", readiness=false. Elapsed: 46.521868ms Aug 17 12:28:44.796: INFO: Pod "pod-projected-secrets-7abf9d00-d849-4f4c-a9c1-b300635aa225": Phase="Pending", Reason="", readiness=false. Elapsed: 2.728635791s Aug 17 12:28:46.887: INFO: Pod "pod-projected-secrets-7abf9d00-d849-4f4c-a9c1-b300635aa225": Phase="Pending", Reason="", readiness=false. Elapsed: 4.819126618s Aug 17 12:28:48.895: INFO: Pod "pod-projected-secrets-7abf9d00-d849-4f4c-a9c1-b300635aa225": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.827178592s STEP: Saw pod success Aug 17 12:28:48.895: INFO: Pod "pod-projected-secrets-7abf9d00-d849-4f4c-a9c1-b300635aa225" satisfied condition "Succeeded or Failed" Aug 17 12:28:48.900: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-7abf9d00-d849-4f4c-a9c1-b300635aa225 container projected-secret-volume-test: STEP: delete the pod Aug 17 12:28:48.930: INFO: Waiting for pod pod-projected-secrets-7abf9d00-d849-4f4c-a9c1-b300635aa225 to disappear Aug 17 12:28:48.934: INFO: Pod pod-projected-secrets-7abf9d00-d849-4f4c-a9c1-b300635aa225 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:28:48.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1426" for this suite. 
• [SLOW TEST:7.204 seconds] [sig-storage] Projected secret /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":303,"completed":171,"skipped":2989,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:28:48.984: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should support proxy with --port 0 [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting the proxy server Aug 17 12:28:49.092: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:28:50.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2859" for this suite. 
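Two details worth decoding in this block: -p 0 asks kubectl proxy to bind an ephemeral port chosen by the kernel (the framework then parses the bound port from the proxy's startup line before curling /api/), and the doubled "kubectl kubectl" appears to be the framework printing the binary path followed by the full argv. A minimal sketch, with <port> standing in for whatever port gets assigned:

    kubectl proxy -p 0 --disable-filter &
    # prints e.g.: Starting to serve on 127.0.0.1:<port>
    curl -s http://127.0.0.1:<port>/api/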
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":303,"completed":172,"skipped":2994,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:28:50.416: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Aug 17 12:28:50.576: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Aug 17 12:28:50.592: INFO: Waiting for terminating namespaces to be deleted... Aug 17 12:28:50.597: INFO: Logging pods the apiserver thinks is on node latest-worker before test Aug 17 12:28:50.604: INFO: kindnet-gmpqb from kube-system started at 2020-08-15 09:42:30 +0000 UTC (1 container statuses recorded) Aug 17 12:28:50.604: INFO: Container kindnet-cni ready: true, restart count 0 Aug 17 12:28:50.604: INFO: kube-proxy-82wrf from kube-system started at 2020-08-15 09:42:30 +0000 UTC (1 container statuses recorded) Aug 17 12:28:50.604: INFO: Container kube-proxy ready: true, restart count 0 Aug 17 12:28:50.604: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test Aug 17 12:28:50.611: INFO: kindnet-grzzh from kube-system started at 2020-08-15 09:42:30 +0000 UTC (1 container statuses recorded) Aug 17 12:28:50.611: INFO: Container kindnet-cni ready: true, restart count 0 Aug 17 12:28:50.612: INFO: kube-proxy-fjk8r from kube-system started at 2020-08-15 09:42:29 +0000 UTC (1 container statuses recorded) Aug 17 12:28:50.612: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-c57279bb-0974-4c1e-a0cc-d39ab31c52cf 42 STEP: Trying to relaunch the pod, now with labels. 
STEP: removing the label kubernetes.io/e2e-c57279bb-0974-4c1e-a0cc-d39ab31c52cf off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-c57279bb-0974-4c1e-a0cc-d39ab31c52cf [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:29:01.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3115" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:10.789 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":303,"completed":173,"skipped":3029,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:29:01.207: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-7474 STEP: creating a selector STEP: Creating the service pods in kubernetes Aug 17 12:29:01.404: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Aug 17 12:29:01.943: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Aug 17 12:29:03.950: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Aug 17 12:29:05.959: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Aug 17 12:29:07.949: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 17 12:29:09.949: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 17 12:29:11.949: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 17 12:29:13.950: INFO: The status of Pod 
netserver-0 is Running (Ready = false) Aug 17 12:29:15.950: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 17 12:29:17.950: INFO: The status of Pod netserver-0 is Running (Ready = true) Aug 17 12:29:17.961: INFO: The status of Pod netserver-1 is Running (Ready = false) Aug 17 12:29:19.969: INFO: The status of Pod netserver-1 is Running (Ready = false) Aug 17 12:29:21.969: INFO: The status of Pod netserver-1 is Running (Ready = false) Aug 17 12:29:23.969: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Aug 17 12:29:30.031: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.31:8080/dial?request=hostname&protocol=udp&host=10.244.2.67&port=8081&tries=1'] Namespace:pod-network-test-7474 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 17 12:29:30.031: INFO: >>> kubeConfig: /root/.kube/config I0817 12:29:30.095126 10 log.go:181] (0x4005b30580) (0x4002504b40) Create stream I0817 12:29:30.095262 10 log.go:181] (0x4005b30580) (0x4002504b40) Stream added, broadcasting: 1 I0817 12:29:30.098418 10 log.go:181] (0x4005b30580) Reply frame received for 1 I0817 12:29:30.098536 10 log.go:181] (0x4005b30580) (0x4002504be0) Create stream I0817 12:29:30.098612 10 log.go:181] (0x4005b30580) (0x4002504be0) Stream added, broadcasting: 3 I0817 12:29:30.099544 10 log.go:181] (0x4005b30580) Reply frame received for 3 I0817 12:29:30.099674 10 log.go:181] (0x4005b30580) (0x4002d46960) Create stream I0817 12:29:30.099762 10 log.go:181] (0x4005b30580) (0x4002d46960) Stream added, broadcasting: 5 I0817 12:29:30.100803 10 log.go:181] (0x4005b30580) Reply frame received for 5 I0817 12:29:30.185755 10 log.go:181] (0x4005b30580) Data frame received for 3 I0817 12:29:30.185947 10 log.go:181] (0x4005b30580) Data frame received for 5 I0817 12:29:30.186076 10 log.go:181] (0x4002d46960) (5) Data frame handling I0817 12:29:30.186221 10 log.go:181] (0x4002504be0) (3) Data frame handling I0817 12:29:30.186377 10 log.go:181] (0x4002504be0) (3) Data frame sent I0817 12:29:30.186488 10 log.go:181] (0x4005b30580) Data frame received for 3 I0817 12:29:30.186559 10 log.go:181] (0x4002504be0) (3) Data frame handling I0817 12:29:30.187781 10 log.go:181] (0x4005b30580) Data frame received for 1 I0817 12:29:30.187860 10 log.go:181] (0x4002504b40) (1) Data frame handling I0817 12:29:30.187951 10 log.go:181] (0x4002504b40) (1) Data frame sent I0817 12:29:30.188067 10 log.go:181] (0x4005b30580) (0x4002504b40) Stream removed, broadcasting: 1 I0817 12:29:30.188205 10 log.go:181] (0x4005b30580) Go away received I0817 12:29:30.188512 10 log.go:181] (0x4005b30580) (0x4002504b40) Stream removed, broadcasting: 1 I0817 12:29:30.188602 10 log.go:181] (0x4005b30580) (0x4002504be0) Stream removed, broadcasting: 3 I0817 12:29:30.188665 10 log.go:181] (0x4005b30580) (0x4002d46960) Stream removed, broadcasting: 5 Aug 17 12:29:30.188: INFO: Waiting for responses: map[] Aug 17 12:29:30.194: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.31:8080/dial?request=hostname&protocol=udp&host=10.244.1.30&port=8081&tries=1'] Namespace:pod-network-test-7474 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 17 12:29:30.194: INFO: >>> kubeConfig: /root/.kube/config I0817 12:29:30.242411 10 log.go:181] (0x40058826e0) (0x4002e06b40) Create stream I0817 12:29:30.242525 10 log.go:181] (0x40058826e0) (0x4002e06b40) Stream 
added, broadcasting: 1 I0817 12:29:30.247743 10 log.go:181] (0x40058826e0) Reply frame received for 1 I0817 12:29:30.247924 10 log.go:181] (0x40058826e0) (0x4002e06be0) Create stream I0817 12:29:30.248027 10 log.go:181] (0x40058826e0) (0x4002e06be0) Stream added, broadcasting: 3 I0817 12:29:30.249700 10 log.go:181] (0x40058826e0) Reply frame received for 3 I0817 12:29:30.249882 10 log.go:181] (0x40058826e0) (0x40020f0000) Create stream I0817 12:29:30.249971 10 log.go:181] (0x40058826e0) (0x40020f0000) Stream added, broadcasting: 5 I0817 12:29:30.251423 10 log.go:181] (0x40058826e0) Reply frame received for 5 I0817 12:29:30.312943 10 log.go:181] (0x40058826e0) Data frame received for 3 I0817 12:29:30.313143 10 log.go:181] (0x4002e06be0) (3) Data frame handling I0817 12:29:30.313304 10 log.go:181] (0x4002e06be0) (3) Data frame sent I0817 12:29:30.313791 10 log.go:181] (0x40058826e0) Data frame received for 3 I0817 12:29:30.313971 10 log.go:181] (0x4002e06be0) (3) Data frame handling I0817 12:29:30.314131 10 log.go:181] (0x40058826e0) Data frame received for 5 I0817 12:29:30.314246 10 log.go:181] (0x40020f0000) (5) Data frame handling I0817 12:29:30.315348 10 log.go:181] (0x40058826e0) Data frame received for 1 I0817 12:29:30.315561 10 log.go:181] (0x4002e06b40) (1) Data frame handling I0817 12:29:30.315708 10 log.go:181] (0x4002e06b40) (1) Data frame sent I0817 12:29:30.315920 10 log.go:181] (0x40058826e0) (0x4002e06b40) Stream removed, broadcasting: 1 I0817 12:29:30.316161 10 log.go:181] (0x40058826e0) Go away received I0817 12:29:30.316436 10 log.go:181] (0x40058826e0) (0x4002e06b40) Stream removed, broadcasting: 1 I0817 12:29:30.316650 10 log.go:181] (0x40058826e0) (0x4002e06be0) Stream removed, broadcasting: 3 I0817 12:29:30.317051 10 log.go:181] (0x40058826e0) (0x40020f0000) Stream removed, broadcasting: 5 Aug 17 12:29:30.317: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:29:30.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-7474" for this suite. 
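The two exec'd curl commands above (recorded verbatim in the log) drive the netserver's /dial endpoint, which forwards a hostname request over UDP to the target pod IP and returns the replies as JSON; "Waiting for responses: map[]" means no expected replies were left outstanding. The probe against the first target, for reference:

    curl -g -q -s 'http://10.244.1.31:8080/dial?request=hostname&protocol=udp&host=10.244.2.67&port=8081&tries=1'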
• [SLOW TEST:29.374 seconds] [sig-network] Networking /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":303,"completed":174,"skipped":3051,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:29:30.583: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod with failed condition STEP: updating the pod Aug 17 12:31:31.415: INFO: Successfully updated pod "var-expansion-efdbbe72-ca20-43af-81af-0aea624cb959" STEP: waiting for pod running STEP: deleting the pod gracefully Aug 17 12:31:35.480: INFO: Deleting pod "var-expansion-efdbbe72-ca20-43af-81af-0aea624cb959" in namespace "var-expansion-2398" Aug 17 12:31:35.488: INFO: Wait up to 5m0s for pod "var-expansion-efdbbe72-ca20-43af-81af-0aea624cb959" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:32:11.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-2398" for this suite. 
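The two-minute gap between "creating the pod with failed condition" and "updating the pod" is the point of this test: the pod is created with a volume subpath whose variable expansion is initially unresolvable, then modified so the expansion succeeds, after which it must reach Running and delete cleanly. A minimal illustrative sketch of the kind of mount involved (field values are assumptions; the actual manifest is not in the log):

    volumeMounts:
    - name: workdir
      mountPath: /volume_mount
      subPathExpr: $(ANNOTATION)   # hypothetical env var; a failing expansion blocks container start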
• [SLOW TEST:160.955 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]","total":303,"completed":175,"skipped":3051,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:32:11.539: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-a0168407-422d-4b80-92d0-290567d3dc11 STEP: Creating a pod to test consume configMaps Aug 17 12:32:12.037: INFO: Waiting up to 5m0s for pod "pod-configmaps-25186d47-ea86-49f8-abdd-a5c0f28f167d" in namespace "configmap-3676" to be "Succeeded or Failed" Aug 17 12:32:12.255: INFO: Pod "pod-configmaps-25186d47-ea86-49f8-abdd-a5c0f28f167d": Phase="Pending", Reason="", readiness=false. Elapsed: 217.262915ms Aug 17 12:32:14.263: INFO: Pod "pod-configmaps-25186d47-ea86-49f8-abdd-a5c0f28f167d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.225410832s Aug 17 12:32:16.379: INFO: Pod "pod-configmaps-25186d47-ea86-49f8-abdd-a5c0f28f167d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.341953393s Aug 17 12:32:18.670: INFO: Pod "pod-configmaps-25186d47-ea86-49f8-abdd-a5c0f28f167d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.632102112s Aug 17 12:32:20.725: INFO: Pod "pod-configmaps-25186d47-ea86-49f8-abdd-a5c0f28f167d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.687064498s STEP: Saw pod success Aug 17 12:32:20.725: INFO: Pod "pod-configmaps-25186d47-ea86-49f8-abdd-a5c0f28f167d" satisfied condition "Succeeded or Failed" Aug 17 12:32:20.800: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-25186d47-ea86-49f8-abdd-a5c0f28f167d container configmap-volume-test: STEP: delete the pod Aug 17 12:32:21.512: INFO: Waiting for pod pod-configmaps-25186d47-ea86-49f8-abdd-a5c0f28f167d to disappear Aug 17 12:32:21.561: INFO: Pod pod-configmaps-25186d47-ea86-49f8-abdd-a5c0f28f167d no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:32:21.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3676" for this suite. • [SLOW TEST:10.350 seconds] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":303,"completed":176,"skipped":3054,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:32:21.890: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 17 12:32:26.294: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 17 12:32:28.800: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733264346, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733264346, 
loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733264346, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733264345, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 17 12:32:31.227: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733264346, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733264346, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733264346, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733264345, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 17 12:32:33.380: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733264346, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733264346, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733264346, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733264345, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 17 12:32:36.233: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] 
AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:32:36.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1332" for this suite. STEP: Destroying namespace "webhook-1332-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:14.990 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":303,"completed":177,"skipped":3065,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:32:36.881: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-e370570a-72f8-4196-8556-9783aa7b8a8d in namespace container-probe-7722 Aug 17 12:32:43.507: INFO: Started pod liveness-e370570a-72f8-4196-8556-9783aa7b8a8d in namespace container-probe-7722 STEP: checking the pod's current state and verifying that restartCount is present Aug 17 12:32:43.520: INFO: Initial restart count of pod liveness-e370570a-72f8-4196-8556-9783aa7b8a8d is 0 Aug 17 12:32:59.827: INFO: Restart count of pod container-probe-7722/liveness-e370570a-72f8-4196-8556-9783aa7b8a8d is now 1 (16.306874771s elapsed) Aug 17 12:33:22.460: INFO: Restart count of pod container-probe-7722/liveness-e370570a-72f8-4196-8556-9783aa7b8a8d is now 2 (38.939994327s elapsed) Aug 17 12:33:40.804: INFO: Restart count of pod 
container-probe-7722/liveness-e370570a-72f8-4196-8556-9783aa7b8a8d is now 3 (57.283450109s elapsed) Aug 17 12:34:01.857: INFO: Restart count of pod container-probe-7722/liveness-e370570a-72f8-4196-8556-9783aa7b8a8d is now 4 (1m18.337245611s elapsed) Aug 17 12:35:03.940: INFO: Restart count of pod container-probe-7722/liveness-e370570a-72f8-4196-8556-9783aa7b8a8d is now 5 (2m20.419649945s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:35:04.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7722" for this suite. • [SLOW TEST:147.868 seconds] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":303,"completed":178,"skipped":3068,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:35:04.750: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to change the type from ExternalName to NodePort [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service externalname-service with the type=ExternalName in namespace services-7450 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-7450 I0817 12:35:05.798656 10 runners.go:190] Created replication controller with name: externalname-service, namespace: services-7450, replica count: 2 I0817 12:35:08.850031 10 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0817 12:35:11.850836 10 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0817 12:35:14.851575 10 
runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 17 12:35:14.851: INFO: Creating new exec pod Aug 17 12:35:22.143: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-7450 execpodf7zt8 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Aug 17 12:35:30.674: INFO: stderr: "I0817 12:35:30.557060 2429 log.go:181] (0x4000132370) (0x4000668000) Create stream\nI0817 12:35:30.561561 2429 log.go:181] (0x4000132370) (0x4000668000) Stream added, broadcasting: 1\nI0817 12:35:30.572402 2429 log.go:181] (0x4000132370) Reply frame received for 1\nI0817 12:35:30.573816 2429 log.go:181] (0x4000132370) (0x4000dbc000) Create stream\nI0817 12:35:30.573938 2429 log.go:181] (0x4000132370) (0x4000dbc000) Stream added, broadcasting: 3\nI0817 12:35:30.575914 2429 log.go:181] (0x4000132370) Reply frame received for 3\nI0817 12:35:30.576387 2429 log.go:181] (0x4000132370) (0x40006a8000) Create stream\nI0817 12:35:30.576497 2429 log.go:181] (0x4000132370) (0x40006a8000) Stream added, broadcasting: 5\nI0817 12:35:30.578261 2429 log.go:181] (0x4000132370) Reply frame received for 5\nI0817 12:35:30.655953 2429 log.go:181] (0x4000132370) Data frame received for 3\nI0817 12:35:30.656223 2429 log.go:181] (0x4000dbc000) (3) Data frame handling\nI0817 12:35:30.656432 2429 log.go:181] (0x4000132370) Data frame received for 5\nI0817 12:35:30.656535 2429 log.go:181] (0x40006a8000) (5) Data frame handling\nI0817 12:35:30.656711 2429 log.go:181] (0x4000132370) Data frame received for 1\nI0817 12:35:30.656920 2429 log.go:181] (0x4000668000) (1) Data frame handling\nI0817 12:35:30.658362 2429 log.go:181] (0x4000668000) (1) Data frame sent\nI0817 12:35:30.658598 2429 log.go:181] (0x40006a8000) (5) Data frame sent\nI0817 12:35:30.658672 2429 log.go:181] (0x4000132370) Data frame received for 5\nI0817 12:35:30.658731 2429 log.go:181] (0x40006a8000) (5) Data frame handling\n+ nc -zv -t -w 2 externalname-service 80\nI0817 12:35:30.659835 2429 log.go:181] (0x40006a8000) (5) Data frame sent\nI0817 12:35:30.659994 2429 log.go:181] (0x4000132370) Data frame received for 5\nI0817 12:35:30.660063 2429 log.go:181] (0x40006a8000) (5) Data frame handling\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0817 12:35:30.661307 2429 log.go:181] (0x4000132370) (0x4000668000) Stream removed, broadcasting: 1\nI0817 12:35:30.662499 2429 log.go:181] (0x4000132370) Go away received\nI0817 12:35:30.665300 2429 log.go:181] (0x4000132370) (0x4000668000) Stream removed, broadcasting: 1\nI0817 12:35:30.665570 2429 log.go:181] (0x4000132370) (0x4000dbc000) Stream removed, broadcasting: 3\nI0817 12:35:30.665744 2429 log.go:181] (0x4000132370) (0x40006a8000) Stream removed, broadcasting: 5\n" Aug 17 12:35:30.676: INFO: stdout: "" Aug 17 12:35:30.681: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-7450 execpodf7zt8 -- /bin/sh -x -c nc -zv -t -w 2 10.104.157.208 80' Aug 17 12:35:32.349: INFO: stderr: "I0817 12:35:32.248858 2449 log.go:181] (0x400028a0b0) (0x4000832000) Create stream\nI0817 12:35:32.251740 2449 log.go:181] (0x400028a0b0) (0x4000832000) Stream added, broadcasting: 1\nI0817 12:35:32.262057 2449 log.go:181] (0x400028a0b0) Reply frame received for 1\nI0817 12:35:32.262587 2449 log.go:181] (0x400028a0b0) (0x4000c90000) Create stream\nI0817 12:35:32.262651 2449 
log.go:181] (0x400028a0b0) (0x4000c90000) Stream added, broadcasting: 3\nI0817 12:35:32.264279 2449 log.go:181] (0x400028a0b0) Reply frame received for 3\nI0817 12:35:32.264570 2449 log.go:181] (0x400028a0b0) (0x40008320a0) Create stream\nI0817 12:35:32.264624 2449 log.go:181] (0x400028a0b0) (0x40008320a0) Stream added, broadcasting: 5\nI0817 12:35:32.265772 2449 log.go:181] (0x400028a0b0) Reply frame received for 5\nI0817 12:35:32.330372 2449 log.go:181] (0x400028a0b0) Data frame received for 5\nI0817 12:35:32.330584 2449 log.go:181] (0x40008320a0) (5) Data frame handling\nI0817 12:35:32.330726 2449 log.go:181] (0x400028a0b0) Data frame received for 3\nI0817 12:35:32.330809 2449 log.go:181] (0x4000c90000) (3) Data frame handling\nI0817 12:35:32.330896 2449 log.go:181] (0x400028a0b0) Data frame received for 1\nI0817 12:35:32.331031 2449 log.go:181] (0x4000832000) (1) Data frame handling\nI0817 12:35:32.331907 2449 log.go:181] (0x40008320a0) (5) Data frame sent\nI0817 12:35:32.332027 2449 log.go:181] (0x400028a0b0) Data frame received for 5\nI0817 12:35:32.332101 2449 log.go:181] (0x40008320a0) (5) Data frame handling\n+ nc -zv -t -w 2 10.104.157.208 80\nConnection to 10.104.157.208 80 port [tcp/http] succeeded!\nI0817 12:35:32.332646 2449 log.go:181] (0x4000832000) (1) Data frame sent\nI0817 12:35:32.333748 2449 log.go:181] (0x400028a0b0) (0x4000832000) Stream removed, broadcasting: 1\nI0817 12:35:32.336247 2449 log.go:181] (0x400028a0b0) Go away received\nI0817 12:35:32.340816 2449 log.go:181] (0x400028a0b0) (0x4000832000) Stream removed, broadcasting: 1\nI0817 12:35:32.341064 2449 log.go:181] (0x400028a0b0) (0x4000c90000) Stream removed, broadcasting: 3\nI0817 12:35:32.341225 2449 log.go:181] (0x400028a0b0) (0x40008320a0) Stream removed, broadcasting: 5\n" Aug 17 12:35:32.349: INFO: stdout: "" Aug 17 12:35:32.350: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-7450 execpodf7zt8 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.11 32723' Aug 17 12:35:34.019: INFO: stderr: "I0817 12:35:33.915922 2469 log.go:181] (0x4000941290) (0x4000c1e640) Create stream\nI0817 12:35:33.919327 2469 log.go:181] (0x4000941290) (0x4000c1e640) Stream added, broadcasting: 1\nI0817 12:35:33.935276 2469 log.go:181] (0x4000941290) Reply frame received for 1\nI0817 12:35:33.935930 2469 log.go:181] (0x4000941290) (0x4000c1e000) Create stream\nI0817 12:35:33.935998 2469 log.go:181] (0x4000941290) (0x4000c1e000) Stream added, broadcasting: 3\nI0817 12:35:33.937529 2469 log.go:181] (0x4000941290) Reply frame received for 3\nI0817 12:35:33.937833 2469 log.go:181] (0x4000941290) (0x40003d8000) Create stream\nI0817 12:35:33.937935 2469 log.go:181] (0x4000941290) (0x40003d8000) Stream added, broadcasting: 5\nI0817 12:35:33.938975 2469 log.go:181] (0x4000941290) Reply frame received for 5\nI0817 12:35:33.995574 2469 log.go:181] (0x4000941290) Data frame received for 5\nI0817 12:35:33.995863 2469 log.go:181] (0x40003d8000) (5) Data frame handling\nI0817 12:35:33.996183 2469 log.go:181] (0x4000941290) Data frame received for 3\nI0817 12:35:33.996313 2469 log.go:181] (0x4000c1e000) (3) Data frame handling\nI0817 12:35:33.996641 2469 log.go:181] (0x40003d8000) (5) Data frame sent\nI0817 12:35:33.997067 2469 log.go:181] (0x4000941290) Data frame received for 5\nI0817 12:35:33.997209 2469 log.go:181] (0x4000941290) Data frame received for 1\nI0817 12:35:33.997354 2469 log.go:181] (0x4000c1e640) (1) Data frame handling\nI0817 12:35:33.997538 2469 
log.go:181] (0x40003d8000) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.11 32723\nI0817 12:35:33.997822 2469 log.go:181] (0x4000c1e640) (1) Data frame sent\nI0817 12:35:33.999824 2469 log.go:181] (0x40003d8000) (5) Data frame sent\nConnection to 172.18.0.11 32723 port [tcp/32723] succeeded!\nI0817 12:35:33.999929 2469 log.go:181] (0x4000941290) Data frame received for 5\nI0817 12:35:34.000079 2469 log.go:181] (0x40003d8000) (5) Data frame handling\nI0817 12:35:34.001268 2469 log.go:181] (0x4000941290) (0x4000c1e640) Stream removed, broadcasting: 1\nI0817 12:35:34.003943 2469 log.go:181] (0x4000941290) Go away received\nI0817 12:35:34.006948 2469 log.go:181] (0x4000941290) (0x4000c1e640) Stream removed, broadcasting: 1\nI0817 12:35:34.007368 2469 log.go:181] (0x4000941290) (0x4000c1e000) Stream removed, broadcasting: 3\nI0817 12:35:34.007679 2469 log.go:181] (0x4000941290) (0x40003d8000) Stream removed, broadcasting: 5\n" Aug 17 12:35:34.020: INFO: stdout: "" Aug 17 12:35:34.020: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-7450 execpodf7zt8 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.14 32723' Aug 17 12:35:35.875: INFO: stderr: "I0817 12:35:35.748847 2489 log.go:181] (0x400032af20) (0x40007223c0) Create stream\nI0817 12:35:35.751952 2489 log.go:181] (0x400032af20) (0x40007223c0) Stream added, broadcasting: 1\nI0817 12:35:35.765218 2489 log.go:181] (0x400032af20) Reply frame received for 1\nI0817 12:35:35.766316 2489 log.go:181] (0x400032af20) (0x4000722460) Create stream\nI0817 12:35:35.766417 2489 log.go:181] (0x400032af20) (0x4000722460) Stream added, broadcasting: 3\nI0817 12:35:35.768151 2489 log.go:181] (0x400032af20) Reply frame received for 3\nI0817 12:35:35.768384 2489 log.go:181] (0x400032af20) (0x4000722500) Create stream\nI0817 12:35:35.768447 2489 log.go:181] (0x400032af20) (0x4000722500) Stream added, broadcasting: 5\nI0817 12:35:35.769711 2489 log.go:181] (0x400032af20) Reply frame received for 5\nI0817 12:35:35.849740 2489 log.go:181] (0x400032af20) Data frame received for 5\nI0817 12:35:35.850769 2489 log.go:181] (0x400032af20) Data frame received for 3\nI0817 12:35:35.851183 2489 log.go:181] (0x400032af20) Data frame received for 1\nI0817 12:35:35.851396 2489 log.go:181] (0x40007223c0) (1) Data frame handling\nI0817 12:35:35.851487 2489 log.go:181] (0x4000722500) (5) Data frame handling\nI0817 12:35:35.851734 2489 log.go:181] (0x4000722460) (3) Data frame handling\nI0817 12:35:35.853700 2489 log.go:181] (0x4000722500) (5) Data frame sent\nI0817 12:35:35.854865 2489 log.go:181] (0x40007223c0) (1) Data frame sent\n+ nc -zv -t -w 2 172.18.0.14 32723\nConnection to 172.18.0.14 32723 port [tcp/32723] succeeded!\nI0817 12:35:35.856339 2489 log.go:181] (0x400032af20) Data frame received for 5\nI0817 12:35:35.856442 2489 log.go:181] (0x4000722500) (5) Data frame handling\nI0817 12:35:35.858217 2489 log.go:181] (0x400032af20) (0x40007223c0) Stream removed, broadcasting: 1\nI0817 12:35:35.858908 2489 log.go:181] (0x400032af20) Go away received\nI0817 12:35:35.861374 2489 log.go:181] (0x400032af20) (0x40007223c0) Stream removed, broadcasting: 1\nI0817 12:35:35.861653 2489 log.go:181] (0x400032af20) (0x4000722460) Stream removed, broadcasting: 3\nI0817 12:35:35.862001 2489 log.go:181] (0x400032af20) (0x4000722500) Stream removed, broadcasting: 5\n" Aug 17 12:35:35.876: INFO: stdout: "" Aug 17 12:35:35.876: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] 
Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:35:36.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7450" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:31.371 seconds] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":303,"completed":179,"skipped":3072,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:35:36.126: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-f86eaab3-2d2e-4a0c-ab63-113ffeb20cfd STEP: Creating a pod to test consume configMaps Aug 17 12:35:36.395: INFO: Waiting up to 5m0s for pod "pod-configmaps-a2df6a74-a725-4bd4-a430-6d893e990bf5" in namespace "configmap-6024" to be "Succeeded or Failed" Aug 17 12:35:36.449: INFO: Pod "pod-configmaps-a2df6a74-a725-4bd4-a430-6d893e990bf5": Phase="Pending", Reason="", readiness=false. Elapsed: 53.337572ms Aug 17 12:35:38.455: INFO: Pod "pod-configmaps-a2df6a74-a725-4bd4-a430-6d893e990bf5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059866595s Aug 17 12:35:40.463: INFO: Pod "pod-configmaps-a2df6a74-a725-4bd4-a430-6d893e990bf5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067422586s Aug 17 12:35:42.798: INFO: Pod "pod-configmaps-a2df6a74-a725-4bd4-a430-6d893e990bf5": Phase="Running", Reason="", readiness=true. Elapsed: 6.403229217s Aug 17 12:35:47.067: INFO: Pod "pod-configmaps-a2df6a74-a725-4bd4-a430-6d893e990bf5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.671808511s STEP: Saw pod success Aug 17 12:35:47.067: INFO: Pod "pod-configmaps-a2df6a74-a725-4bd4-a430-6d893e990bf5" satisfied condition "Succeeded or Failed" Aug 17 12:35:47.316: INFO: Trying to get logs from node latest-worker pod pod-configmaps-a2df6a74-a725-4bd4-a430-6d893e990bf5 container configmap-volume-test: STEP: delete the pod Aug 17 12:35:48.324: INFO: Waiting for pod pod-configmaps-a2df6a74-a725-4bd4-a430-6d893e990bf5 to disappear Aug 17 12:35:48.339: INFO: Pod pod-configmaps-a2df6a74-a725-4bd4-a430-6d893e990bf5 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:35:48.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6024" for this suite. • [SLOW TEST:12.254 seconds] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":303,"completed":180,"skipped":3104,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] PreStop /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:35:48.382: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171 [It] should call prestop when killing a pod [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating server pod server in namespace prestop-885 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-885 STEP: Deleting pre-stop pod Aug 17 12:36:05.967: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." 
], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:36:05.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-885" for this suite. • [SLOW TEST:17.664 seconds] [k8s.io] [sig-node] PreStop /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should call prestop when killing a pod [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":303,"completed":181,"skipped":3115,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} [k8s.io] Pods should be updated [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:36:06.046: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should be updated [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Aug 17 12:36:15.365: INFO: Successfully updated pod "pod-update-b0f4a64a-478a-48cb-9990-2747b5eeea3d" STEP: verifying the updated pod is in kubernetes Aug 17 12:36:15.386: INFO: Pod update OK [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:36:15.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5143" for this suite. 
• [SLOW TEST:9.351 seconds] [k8s.io] Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be updated [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":303,"completed":182,"skipped":3115,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:36:15.398: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on node default medium Aug 17 12:36:15.641: INFO: Waiting up to 5m0s for pod "pod-cee590b8-c637-4331-b2ae-e6e61067ad15" in namespace "emptydir-9427" to be "Succeeded or Failed" Aug 17 12:36:15.652: INFO: Pod "pod-cee590b8-c637-4331-b2ae-e6e61067ad15": Phase="Pending", Reason="", readiness=false. Elapsed: 10.924383ms Aug 17 12:36:17.937: INFO: Pod "pod-cee590b8-c637-4331-b2ae-e6e61067ad15": Phase="Pending", Reason="", readiness=false. Elapsed: 2.29526706s Aug 17 12:36:20.264: INFO: Pod "pod-cee590b8-c637-4331-b2ae-e6e61067ad15": Phase="Pending", Reason="", readiness=false. Elapsed: 4.622995151s Aug 17 12:36:23.103: INFO: Pod "pod-cee590b8-c637-4331-b2ae-e6e61067ad15": Phase="Pending", Reason="", readiness=false. Elapsed: 7.461793186s Aug 17 12:36:25.199: INFO: Pod "pod-cee590b8-c637-4331-b2ae-e6e61067ad15": Phase="Pending", Reason="", readiness=false. Elapsed: 9.557416614s Aug 17 12:36:27.450: INFO: Pod "pod-cee590b8-c637-4331-b2ae-e6e61067ad15": Phase="Pending", Reason="", readiness=false. Elapsed: 11.808099223s Aug 17 12:36:30.121: INFO: Pod "pod-cee590b8-c637-4331-b2ae-e6e61067ad15": Phase="Pending", Reason="", readiness=false. Elapsed: 14.48002555s Aug 17 12:36:32.438: INFO: Pod "pod-cee590b8-c637-4331-b2ae-e6e61067ad15": Phase="Pending", Reason="", readiness=false. Elapsed: 16.796233838s Aug 17 12:36:34.456: INFO: Pod "pod-cee590b8-c637-4331-b2ae-e6e61067ad15": Phase="Pending", Reason="", readiness=false. Elapsed: 18.814690555s Aug 17 12:36:36.516: INFO: Pod "pod-cee590b8-c637-4331-b2ae-e6e61067ad15": Phase="Running", Reason="", readiness=true. Elapsed: 20.874913679s Aug 17 12:36:38.584: INFO: Pod "pod-cee590b8-c637-4331-b2ae-e6e61067ad15": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 22.942618586s STEP: Saw pod success Aug 17 12:36:38.585: INFO: Pod "pod-cee590b8-c637-4331-b2ae-e6e61067ad15" satisfied condition "Succeeded or Failed" Aug 17 12:36:38.590: INFO: Trying to get logs from node latest-worker pod pod-cee590b8-c637-4331-b2ae-e6e61067ad15 container test-container: STEP: delete the pod Aug 17 12:36:38.942: INFO: Waiting for pod pod-cee590b8-c637-4331-b2ae-e6e61067ad15 to disappear Aug 17 12:36:39.034: INFO: Pod pod-cee590b8-c637-4331-b2ae-e6e61067ad15 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:36:39.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9427" for this suite. • [SLOW TEST:23.796 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":183,"skipped":3122,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:36:39.197: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-upd-038f80f3-9585-4887-be50-dc17c8085461 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:36:59.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3534" for this suite. 
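What "binary data should be reflected in volume" exercises can be sketched with a two-key ConfigMap; the object names and the 0xDEADBEEF payload below are illustrative, not taken from this run:

kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-binary
data:
  text: hello
binaryData:
  blob: 3q2+7w==      # base64 of the bytes 0xDE 0xAD 0xBE 0xEF
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-binary-reader
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: busybox
    # Both the text key and the binary key surface as files in the volume.
    command: ["sh", "-c", "cat /etc/cm/text; od -An -tx1 /etc/cm/blob"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: demo-binary
EOF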
• [SLOW TEST:21.630 seconds] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":184,"skipped":3164,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:37:00.829: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod Aug 17 12:37:03.404: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:37:39.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7583" for this suite. 
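A minimal sketch of the failure mode this test asserts: a RestartNever pod whose init container fails, so the app container is never started (names are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: init-fails
spec:
  restartPolicy: Never          # no retries: the pod goes straight to Failed
  initContainers:
  - name: init
    image: busybox
    command: ["sh", "-c", "exit 1"]   # the init step fails once
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo never runs"]

kubectl get pod init-fails should then report Init:Error and phase Failed.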
• [SLOW TEST:38.873 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":303,"completed":185,"skipped":3192,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:37:39.703: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Aug 17 12:37:52.142: INFO: Successfully updated pod "adopt-release-2dsll" STEP: Checking that the Job readopts the Pod Aug 17 12:37:52.142: INFO: Waiting up to 15m0s for pod "adopt-release-2dsll" in namespace "job-4913" to be "adopted" Aug 17 12:37:52.400: INFO: Pod "adopt-release-2dsll": Phase="Running", Reason="", readiness=true. Elapsed: 257.617434ms Aug 17 12:37:54.407: INFO: Pod "adopt-release-2dsll": Phase="Running", Reason="", readiness=true. Elapsed: 2.264573964s Aug 17 12:37:54.408: INFO: Pod "adopt-release-2dsll" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Aug 17 12:37:55.019: INFO: Successfully updated pod "adopt-release-2dsll" STEP: Checking that the Job releases the Pod Aug 17 12:37:55.019: INFO: Waiting up to 15m0s for pod "adopt-release-2dsll" in namespace "job-4913" to be "released" Aug 17 12:37:55.322: INFO: Pod "adopt-release-2dsll": Phase="Running", Reason="", readiness=true. Elapsed: 303.05755ms Aug 17 12:37:57.488: INFO: Pod "adopt-release-2dsll": Phase="Running", Reason="", readiness=true. Elapsed: 2.468645814s Aug 17 12:37:57.488: INFO: Pod "adopt-release-2dsll" satisfied condition "released" [AfterEach] [sig-apps] Job /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:37:57.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-4913" for this suite. 
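The adopt/release dance can be driven with labels alone; a sketch, assuming the job-name and controller-uid labels the Job controller stamps onto its pods (the job name is illustrative):

kubectl create job adopt-demo --image=busybox -- sleep 3600
POD=$(kubectl get pods -l job-name=adopt-demo -o name | head -n1)
# Stripping the matching labels makes the controller release (orphan) the pod:
kubectl label "$POD" job-name- controller-uid-
kubectl get "$POD" -o jsonpath='{.metadata.ownerReferences}'   # now empty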
• [SLOW TEST:18.115 seconds] [sig-apps] Job /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":303,"completed":186,"skipped":3195,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:37:57.821: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartNever pod [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod Aug 17 12:37:59.066: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:38:19.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7894" for this suite. 
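By contrast with the failing-init sketch earlier, this test covers the happy path: init containers run sequentially to completion before the app container starts. A sketch (names illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: init-order
spec:
  restartPolicy: Never
  initContainers:               # run one at a time; each must exit 0
  - name: init-1
    image: busybox
    command: ["sh", "-c", "echo first"]
  - name: init-2
    image: busybox
    command: ["sh", "-c", "echo second"]
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo app starts only after both inits succeed"]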
• [SLOW TEST:22.006 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should invoke init containers on a RestartNever pod [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":303,"completed":187,"skipped":3219,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} S ------------------------------ [sig-network] Services should test the lifecycle of an Endpoint [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:38:19.827: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should test the lifecycle of an Endpoint [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating an Endpoint STEP: waiting for available Endpoint STEP: listing all Endpoints STEP: updating the Endpoint STEP: fetching the Endpoint STEP: patching the Endpoint STEP: fetching the Endpoint STEP: deleting the Endpoint by Collection STEP: waiting for Endpoint deletion STEP: fetching the Endpoint [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:38:20.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2699" for this suite. 
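The Endpoint lifecycle steps logged above (create, update, patch, delete by collection) map onto plain kubectl verbs; the address, port, and label below are illustrative:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Endpoints
metadata:
  name: external-db
subsets:
- addresses:
  - ip: 10.0.0.5
  ports:
  - port: 5432
EOF
kubectl patch endpoints external-db --type=merge -p '{"metadata":{"labels":{"tier":"db"}}}'
kubectl delete endpoints -l tier=db     # delete as a labeled collection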
[AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 •{"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":303,"completed":188,"skipped":3220,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:38:20.463: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:38:20.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-2849" for this suite. 
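The discovery documents this test fetches are directly inspectable; a sketch, assuming jq is available:

kubectl get --raw /apis \
  | jq '.groups[] | select(.name == "apiextensions.k8s.io").preferredVersion'
kubectl get --raw /apis/apiextensions.k8s.io/v1 | jq '.resources[].name'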
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":303,"completed":189,"skipped":3224,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSS ------------------------------ [sig-api-machinery] Secrets should patch a secret [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:38:20.792: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a secret [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a secret STEP: listing secrets in all namespaces to ensure that there are more than zero STEP: patching the secret STEP: deleting the secret using a LabelSelector STEP: listing secrets in all namespaces, searching for label name and value in patch [AfterEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:38:21.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-25" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":303,"completed":190,"skipped":3230,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:38:21.744: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:38:28.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7583" for this suite. 
• [SLOW TEST:6.632 seconds] [k8s.io] Kubelet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when scheduling a busybox Pod with hostAliases /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:137 should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":191,"skipped":3256,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:38:28.378: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:89 Aug 17 12:38:29.033: INFO: Waiting up to 1m0s for all nodes to be ready Aug 17 12:39:29.110: INFO: Waiting for terminating namespaces to be deleted... [It] validates lower priority pod preemption by critical pod [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create pods that use 2/3 of node resources. Aug 17 12:39:29.164: INFO: Created pod: pod0-sched-preemption-low-priority Aug 17 12:39:29.279: INFO: Created pod: pod1-sched-preemption-medium-priority STEP: Wait for pods to be scheduled. STEP: Run a critical pod that use same resources as that of a lower priority pod [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:39:55.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-6279" for this suite. 
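The low/medium/critical priorities above come from PriorityClass objects; preemption evicts lower-priority pods when a higher-priority pod cannot otherwise schedule. A sketch of a custom class plus a pod that uses it (names and value are illustrative; "critical" pods use the built-in system-cluster-critical or system-node-critical classes):

kubectl apply -f - <<EOF
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: low-priority
value: 100                      # larger value means higher priority
globalDefault: false
description: demo class for preemption experiments
---
apiVersion: v1
kind: Pod
metadata:
  name: preemptible
spec:
  priorityClassName: low-priority
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.2
EOF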
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77 • [SLOW TEST:87.460 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates lower priority pod preemption by critical pod [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","total":303,"completed":192,"skipped":3271,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSS ------------------------------ [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Discovery /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:39:55.838: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename discovery STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Discovery /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/discovery.go:39 STEP: Setting up server cert [It] should validate PreferredVersion for each APIGroup [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 17 12:39:58.140: INFO: Checking APIGroup: apiregistration.k8s.io Aug 17 12:39:58.143: INFO: PreferredVersion.GroupVersion: apiregistration.k8s.io/v1 Aug 17 12:39:58.143: INFO: Versions found [{apiregistration.k8s.io/v1 v1} {apiregistration.k8s.io/v1beta1 v1beta1}] Aug 17 12:39:58.144: INFO: apiregistration.k8s.io/v1 matches apiregistration.k8s.io/v1 Aug 17 12:39:58.144: INFO: Checking APIGroup: extensions Aug 17 12:39:58.146: INFO: PreferredVersion.GroupVersion: extensions/v1beta1 Aug 17 12:39:58.146: INFO: Versions found [{extensions/v1beta1 v1beta1}] Aug 17 12:39:58.146: INFO: extensions/v1beta1 matches extensions/v1beta1 Aug 17 12:39:58.146: INFO: Checking APIGroup: apps Aug 17 12:39:58.148: INFO: PreferredVersion.GroupVersion: apps/v1 Aug 17 12:39:58.148: INFO: Versions found [{apps/v1 v1}] Aug 17 12:39:58.148: INFO: apps/v1 matches apps/v1 Aug 17 12:39:58.148: INFO: Checking APIGroup: events.k8s.io Aug 17 12:39:58.150: INFO: PreferredVersion.GroupVersion: events.k8s.io/v1 Aug 17 12:39:58.150: INFO: Versions found [{events.k8s.io/v1 v1} {events.k8s.io/v1beta1 v1beta1}] Aug 17 12:39:58.150: INFO: events.k8s.io/v1 matches events.k8s.io/v1 Aug 17 12:39:58.150: INFO: Checking APIGroup: authentication.k8s.io Aug 17 12:39:58.151: INFO: PreferredVersion.GroupVersion: 
authentication.k8s.io/v1 Aug 17 12:39:58.151: INFO: Versions found [{authentication.k8s.io/v1 v1} {authentication.k8s.io/v1beta1 v1beta1}] Aug 17 12:39:58.152: INFO: authentication.k8s.io/v1 matches authentication.k8s.io/v1 Aug 17 12:39:58.152: INFO: Checking APIGroup: authorization.k8s.io Aug 17 12:39:58.153: INFO: PreferredVersion.GroupVersion: authorization.k8s.io/v1 Aug 17 12:39:58.153: INFO: Versions found [{authorization.k8s.io/v1 v1} {authorization.k8s.io/v1beta1 v1beta1}] Aug 17 12:39:58.153: INFO: authorization.k8s.io/v1 matches authorization.k8s.io/v1 Aug 17 12:39:58.153: INFO: Checking APIGroup: autoscaling Aug 17 12:39:58.155: INFO: PreferredVersion.GroupVersion: autoscaling/v1 Aug 17 12:39:58.156: INFO: Versions found [{autoscaling/v1 v1} {autoscaling/v2beta1 v2beta1} {autoscaling/v2beta2 v2beta2}] Aug 17 12:39:58.156: INFO: autoscaling/v1 matches autoscaling/v1 Aug 17 12:39:58.156: INFO: Checking APIGroup: batch Aug 17 12:39:58.157: INFO: PreferredVersion.GroupVersion: batch/v1 Aug 17 12:39:58.157: INFO: Versions found [{batch/v1 v1} {batch/v1beta1 v1beta1}] Aug 17 12:39:58.157: INFO: batch/v1 matches batch/v1 Aug 17 12:39:58.157: INFO: Checking APIGroup: certificates.k8s.io Aug 17 12:39:58.158: INFO: PreferredVersion.GroupVersion: certificates.k8s.io/v1 Aug 17 12:39:58.158: INFO: Versions found [{certificates.k8s.io/v1 v1} {certificates.k8s.io/v1beta1 v1beta1}] Aug 17 12:39:58.158: INFO: certificates.k8s.io/v1 matches certificates.k8s.io/v1 Aug 17 12:39:58.158: INFO: Checking APIGroup: networking.k8s.io Aug 17 12:39:58.159: INFO: PreferredVersion.GroupVersion: networking.k8s.io/v1 Aug 17 12:39:58.159: INFO: Versions found [{networking.k8s.io/v1 v1} {networking.k8s.io/v1beta1 v1beta1}] Aug 17 12:39:58.159: INFO: networking.k8s.io/v1 matches networking.k8s.io/v1 Aug 17 12:39:58.159: INFO: Checking APIGroup: policy Aug 17 12:39:58.161: INFO: PreferredVersion.GroupVersion: policy/v1beta1 Aug 17 12:39:58.161: INFO: Versions found [{policy/v1beta1 v1beta1}] Aug 17 12:39:58.161: INFO: policy/v1beta1 matches policy/v1beta1 Aug 17 12:39:58.161: INFO: Checking APIGroup: rbac.authorization.k8s.io Aug 17 12:39:58.165: INFO: PreferredVersion.GroupVersion: rbac.authorization.k8s.io/v1 Aug 17 12:39:58.165: INFO: Versions found [{rbac.authorization.k8s.io/v1 v1} {rbac.authorization.k8s.io/v1beta1 v1beta1}] Aug 17 12:39:58.165: INFO: rbac.authorization.k8s.io/v1 matches rbac.authorization.k8s.io/v1 Aug 17 12:39:58.165: INFO: Checking APIGroup: storage.k8s.io Aug 17 12:39:58.167: INFO: PreferredVersion.GroupVersion: storage.k8s.io/v1 Aug 17 12:39:58.167: INFO: Versions found [{storage.k8s.io/v1 v1} {storage.k8s.io/v1beta1 v1beta1}] Aug 17 12:39:58.167: INFO: storage.k8s.io/v1 matches storage.k8s.io/v1 Aug 17 12:39:58.167: INFO: Checking APIGroup: admissionregistration.k8s.io Aug 17 12:39:58.169: INFO: PreferredVersion.GroupVersion: admissionregistration.k8s.io/v1 Aug 17 12:39:58.169: INFO: Versions found [{admissionregistration.k8s.io/v1 v1} {admissionregistration.k8s.io/v1beta1 v1beta1}] Aug 17 12:39:58.169: INFO: admissionregistration.k8s.io/v1 matches admissionregistration.k8s.io/v1 Aug 17 12:39:58.169: INFO: Checking APIGroup: apiextensions.k8s.io Aug 17 12:39:58.171: INFO: PreferredVersion.GroupVersion: apiextensions.k8s.io/v1 Aug 17 12:39:58.171: INFO: Versions found [{apiextensions.k8s.io/v1 v1} {apiextensions.k8s.io/v1beta1 v1beta1}] Aug 17 12:39:58.171: INFO: apiextensions.k8s.io/v1 matches apiextensions.k8s.io/v1 Aug 17 12:39:58.171: INFO: Checking APIGroup: scheduling.k8s.io Aug 17 
12:39:58.173: INFO: PreferredVersion.GroupVersion: scheduling.k8s.io/v1 Aug 17 12:39:58.173: INFO: Versions found [{scheduling.k8s.io/v1 v1} {scheduling.k8s.io/v1beta1 v1beta1}] Aug 17 12:39:58.173: INFO: scheduling.k8s.io/v1 matches scheduling.k8s.io/v1 Aug 17 12:39:58.173: INFO: Checking APIGroup: coordination.k8s.io Aug 17 12:39:58.175: INFO: PreferredVersion.GroupVersion: coordination.k8s.io/v1 Aug 17 12:39:58.175: INFO: Versions found [{coordination.k8s.io/v1 v1} {coordination.k8s.io/v1beta1 v1beta1}] Aug 17 12:39:58.175: INFO: coordination.k8s.io/v1 matches coordination.k8s.io/v1 Aug 17 12:39:58.175: INFO: Checking APIGroup: node.k8s.io Aug 17 12:39:58.177: INFO: PreferredVersion.GroupVersion: node.k8s.io/v1beta1 Aug 17 12:39:58.177: INFO: Versions found [{node.k8s.io/v1beta1 v1beta1}] Aug 17 12:39:58.178: INFO: node.k8s.io/v1beta1 matches node.k8s.io/v1beta1 Aug 17 12:39:58.178: INFO: Checking APIGroup: discovery.k8s.io Aug 17 12:39:58.179: INFO: PreferredVersion.GroupVersion: discovery.k8s.io/v1beta1 Aug 17 12:39:58.179: INFO: Versions found [{discovery.k8s.io/v1beta1 v1beta1}] Aug 17 12:39:58.179: INFO: discovery.k8s.io/v1beta1 matches discovery.k8s.io/v1beta1 [AfterEach] [sig-api-machinery] Discovery /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:39:58.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "discovery-7121" for this suite. •{"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":303,"completed":193,"skipped":3274,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:39:58.191: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 17 12:40:02.302: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733264802, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63733264802, loc:(*time.Location)(0x6e4f160)}}, Reason:"NewReplicaSetCreated", Message:"Created new replica set \"sample-webhook-deployment-cbccbf6bb\""}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733264802, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733264802, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Aug 17 12:40:04.744: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733264802, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733264802, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733264802, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733264802, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 17 12:40:06.309: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733264802, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733264802, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733264802, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733264802, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 17 12:40:09.369: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:40:12.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9602" for this suite. STEP: Destroying namespace "webhook-9602-markers" for this suite. 
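Annotation: the webhook test above enumerates the mutating webhook configurations it created by label and then deletes them as a single collection before verifying that a fresh ConfigMap is no longer mutated. A minimal client-go sketch of those two calls; the label selector and kubeconfig path are assumptions, not values taken from this run:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// List mutating webhook configurations by label, the way the test
	// enumerates the ones it created (selector value is hypothetical).
	sel := "e2e-list-test-webhooks=true"
	list, err := cs.AdmissionregistrationV1().MutatingWebhookConfigurations().
		List(context.TODO(), metav1.ListOptions{LabelSelector: sel})
	if err != nil {
		panic(err)
	}
	for _, wh := range list.Items {
		fmt.Println("found mutating webhook configuration:", wh.Name)
	}

	// Delete the whole collection in one call, as the test does before
	// checking that new ConfigMaps are left unmodified.
	err = cs.AdmissionregistrationV1().MutatingWebhookConfigurations().
		DeleteCollection(context.TODO(), metav1.DeleteOptions{},
			metav1.ListOptions{LabelSelector: sel})
	if err != nil {
		panic(err)
	}
}
```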
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:14.396 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":303,"completed":194,"skipped":3301,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:40:12.590: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 17 12:40:13.098: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6f4bb3f3-ae2d-4b38-b326-3369c218bb9e" in namespace "projected-9378" to be "Succeeded or Failed" Aug 17 12:40:13.350: INFO: Pod "downwardapi-volume-6f4bb3f3-ae2d-4b38-b326-3369c218bb9e": Phase="Pending", Reason="", readiness=false. Elapsed: 251.783744ms Aug 17 12:40:15.377: INFO: Pod "downwardapi-volume-6f4bb3f3-ae2d-4b38-b326-3369c218bb9e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.27905635s Aug 17 12:40:17.545: INFO: Pod "downwardapi-volume-6f4bb3f3-ae2d-4b38-b326-3369c218bb9e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.447546503s Aug 17 12:40:19.552: INFO: Pod "downwardapi-volume-6f4bb3f3-ae2d-4b38-b326-3369c218bb9e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.453774102s STEP: Saw pod success Aug 17 12:40:19.552: INFO: Pod "downwardapi-volume-6f4bb3f3-ae2d-4b38-b326-3369c218bb9e" satisfied condition "Succeeded or Failed" Aug 17 12:40:19.556: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-6f4bb3f3-ae2d-4b38-b326-3369c218bb9e container client-container: STEP: delete the pod Aug 17 12:40:19.722: INFO: Waiting for pod downwardapi-volume-6f4bb3f3-ae2d-4b38-b326-3369c218bb9e to disappear Aug 17 12:40:19.753: INFO: Pod downwardapi-volume-6f4bb3f3-ae2d-4b38-b326-3369c218bb9e no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:40:19.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9378" for this suite. • [SLOW TEST:7.247 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":303,"completed":195,"skipped":3315,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:40:19.839: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a Namespace STEP: patching the Namespace STEP: get the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:40:20.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-5098" for this suite. STEP: Destroying namespace "nspatchtest-edace2ff-b7a3-499b-8e98-0ff96fdcd90e-289" for this suite. 
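Annotation: the Namespaces test above patches the namespace it created and then reads it back to confirm the label landed. A sketch of the same patch-and-verify flow using a strategic-merge body; the namespace name and label key/value are hypothetical:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Add a label to an existing namespace with a strategic-merge patch;
	// the returned object already reflects the change, so it can be
	// checked directly, mirroring the test's "get and ensure label" step.
	patch := []byte(`{"metadata":{"labels":{"testLabel":"testValue"}}}`)
	ns, err := cs.CoreV1().Namespaces().Patch(
		context.TODO(), "nspatchtest-example", types.StrategicMergePatchType,
		patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("labels after patch:", ns.Labels)
}
```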
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":303,"completed":196,"skipped":3327,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:40:20.265: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on tmpfs Aug 17 12:40:20.400: INFO: Waiting up to 5m0s for pod "pod-e475bf47-f0b4-41c0-be65-397dbd5f622e" in namespace "emptydir-8038" to be "Succeeded or Failed" Aug 17 12:40:20.501: INFO: Pod "pod-e475bf47-f0b4-41c0-be65-397dbd5f622e": Phase="Pending", Reason="", readiness=false. Elapsed: 101.429648ms Aug 17 12:40:22.508: INFO: Pod "pod-e475bf47-f0b4-41c0-be65-397dbd5f622e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10792934s Aug 17 12:40:24.923: INFO: Pod "pod-e475bf47-f0b4-41c0-be65-397dbd5f622e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.52320907s Aug 17 12:40:27.270: INFO: Pod "pod-e475bf47-f0b4-41c0-be65-397dbd5f622e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.870510588s STEP: Saw pod success Aug 17 12:40:27.271: INFO: Pod "pod-e475bf47-f0b4-41c0-be65-397dbd5f622e" satisfied condition "Succeeded or Failed" Aug 17 12:40:27.665: INFO: Trying to get logs from node latest-worker2 pod pod-e475bf47-f0b4-41c0-be65-397dbd5f622e container test-container: STEP: delete the pod Aug 17 12:40:27.905: INFO: Waiting for pod pod-e475bf47-f0b4-41c0-be65-397dbd5f622e to disappear Aug 17 12:40:27.942: INFO: Pod pod-e475bf47-f0b4-41c0-be65-397dbd5f622e no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:40:27.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8038" for this suite. 
• [SLOW TEST:8.137 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":197,"skipped":3342,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:40:28.403: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:40:42.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2579" for this suite. • [SLOW TEST:14.456 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance]","total":303,"completed":198,"skipped":3362,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:40:42.863: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should serve multiport endpoints from pods [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service multi-endpoint-test in namespace services-6700 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6700 to expose endpoints map[] Aug 17 12:40:43.415: INFO: successfully validated that service multi-endpoint-test in namespace services-6700 exposes endpoints map[] STEP: Creating pod pod1 in namespace services-6700 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6700 to expose endpoints map[pod1:[100]] Aug 17 12:40:47.543: INFO: successfully validated that service multi-endpoint-test in namespace services-6700 exposes endpoints map[pod1:[100]] STEP: Creating pod pod2 in namespace services-6700 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6700 to expose endpoints map[pod1:[100] pod2:[101]] Aug 17 12:40:51.925: INFO: successfully validated that service multi-endpoint-test in namespace services-6700 exposes endpoints map[pod1:[100] pod2:[101]] STEP: Deleting pod pod1 in namespace services-6700 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6700 to expose endpoints map[pod2:[101]] Aug 17 12:40:52.945: INFO: successfully validated that service multi-endpoint-test in namespace services-6700 exposes endpoints map[pod2:[101]] STEP: Deleting pod pod2 in namespace services-6700 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6700 to expose endpoints map[] Aug 17 12:40:53.146: INFO: successfully validated that service multi-endpoint-test in namespace services-6700 exposes endpoints map[] [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:40:53.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6700" for this suite. 
[AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:10.940 seconds] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":303,"completed":199,"skipped":3438,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:40:53.804: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-1961 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-1961 STEP: creating replication controller externalsvc in namespace services-1961 I0817 12:40:54.976411 10 runners.go:190] Created replication controller with name: externalsvc, namespace: services-1961, replica count: 2 I0817 12:40:58.027897 10 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0817 12:41:01.028656 10 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Aug 17 12:41:01.070: INFO: Creating new exec pod Aug 17 12:41:13.780: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-1961 execpod4q84h -- /bin/sh -x -c nslookup clusterip-service.services-1961.svc.cluster.local' Aug 17 12:41:16.054: INFO: stderr: "I0817 12:41:15.942437 2509 log.go:181] (0x40006b4c60) (0x40005db180) Create stream\nI0817 12:41:15.947935 2509 log.go:181] (0x40006b4c60) (0x40005db180) Stream added, broadcasting: 1\nI0817 
12:41:15.962708 2509 log.go:181] (0x40006b4c60) Reply frame received for 1\nI0817 12:41:15.963905 2509 log.go:181] (0x40006b4c60) (0x4000282000) Create stream\nI0817 12:41:15.964129 2509 log.go:181] (0x40006b4c60) (0x4000282000) Stream added, broadcasting: 3\nI0817 12:41:15.966502 2509 log.go:181] (0x40006b4c60) Reply frame received for 3\nI0817 12:41:15.967186 2509 log.go:181] (0x40006b4c60) (0x40005db860) Create stream\nI0817 12:41:15.967344 2509 log.go:181] (0x40006b4c60) (0x40005db860) Stream added, broadcasting: 5\nI0817 12:41:15.969271 2509 log.go:181] (0x40006b4c60) Reply frame received for 5\nI0817 12:41:16.024107 2509 log.go:181] (0x40006b4c60) Data frame received for 5\nI0817 12:41:16.024372 2509 log.go:181] (0x40005db860) (5) Data frame handling\nI0817 12:41:16.025031 2509 log.go:181] (0x40005db860) (5) Data frame sent\n+ nslookup clusterip-service.services-1961.svc.cluster.local\nI0817 12:41:16.031841 2509 log.go:181] (0x40006b4c60) Data frame received for 3\nI0817 12:41:16.031982 2509 log.go:181] (0x4000282000) (3) Data frame handling\nI0817 12:41:16.032149 2509 log.go:181] (0x4000282000) (3) Data frame sent\nI0817 12:41:16.032623 2509 log.go:181] (0x40006b4c60) Data frame received for 3\nI0817 12:41:16.032822 2509 log.go:181] (0x4000282000) (3) Data frame handling\nI0817 12:41:16.032962 2509 log.go:181] (0x4000282000) (3) Data frame sent\nI0817 12:41:16.033557 2509 log.go:181] (0x40006b4c60) Data frame received for 3\nI0817 12:41:16.033702 2509 log.go:181] (0x4000282000) (3) Data frame handling\nI0817 12:41:16.034425 2509 log.go:181] (0x40006b4c60) Data frame received for 5\nI0817 12:41:16.034568 2509 log.go:181] (0x40005db860) (5) Data frame handling\nI0817 12:41:16.035512 2509 log.go:181] (0x40006b4c60) Data frame received for 1\nI0817 12:41:16.035616 2509 log.go:181] (0x40005db180) (1) Data frame handling\nI0817 12:41:16.035728 2509 log.go:181] (0x40005db180) (1) Data frame sent\nI0817 12:41:16.036409 2509 log.go:181] (0x40006b4c60) (0x40005db180) Stream removed, broadcasting: 1\nI0817 12:41:16.040897 2509 log.go:181] (0x40006b4c60) Go away received\nI0817 12:41:16.043870 2509 log.go:181] (0x40006b4c60) (0x40005db180) Stream removed, broadcasting: 1\nI0817 12:41:16.044574 2509 log.go:181] (0x40006b4c60) (0x4000282000) Stream removed, broadcasting: 3\nI0817 12:41:16.044957 2509 log.go:181] (0x40006b4c60) (0x40005db860) Stream removed, broadcasting: 5\n" Aug 17 12:41:16.055: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-1961.svc.cluster.local\tcanonical name = externalsvc.services-1961.svc.cluster.local.\nName:\texternalsvc.services-1961.svc.cluster.local\nAddress: 10.102.116.101\n\n" STEP: deleting ReplicationController externalsvc in namespace services-1961, will wait for the garbage collector to delete the pods Aug 17 12:41:16.119: INFO: Deleting ReplicationController externalsvc took: 7.41726ms Aug 17 12:41:17.120: INFO: Terminating ReplicationController externalsvc pods took: 1.000660428s Aug 17 12:41:31.384: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:41:32.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1961" for this suite. 
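Annotation: the conversion this test performs is an in-place update of Spec.Type on the live service; afterwards the cluster DNS answers for clusterip-service with a CNAME to the externalsvc FQDN, which is exactly what the nslookup output above shows. A sketch of the type change; the service and namespace names come from this run, the rest is assumption:

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	svcs := cs.CoreV1().Services("services-1961")
	svc, err := svcs.Get(context.TODO(), "clusterip-service", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Switch the service to ExternalName. The ClusterIP must be cleared:
	// an ExternalName service is pure DNS (a CNAME) and allocates no IP.
	svc.Spec.Type = corev1.ServiceTypeExternalName
	svc.Spec.ExternalName = "externalsvc.services-1961.svc.cluster.local"
	svc.Spec.ClusterIP = ""
	if _, err := svcs.Update(context.TODO(), svc, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}
```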
[AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:38.902 seconds] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":303,"completed":200,"skipped":3455,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:41:32.707: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 17 12:41:32.993: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:41:39.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3478" for this suite. 
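Annotation: the Pods test above submits a pod and then retrieves its container log over a websocket connection to the pod's "log" subresource. client-go's everyday helper streams that same subresource over plain HTTP; the sketch below uses the helper, and the websocket variant differs only in how the connection is dialed. Pod and namespace names are placeholders:

```go
package main

import (
	"context"
	"io"
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// GetLogs targets the same "log" subresource the websocket test hits;
	// the e2e test dials it with a websocket client instead of plain HTTP.
	req := cs.CoreV1().Pods("pods-example").
		GetLogs("pod-logs-websocket", &corev1.PodLogOptions{})
	stream, err := req.Stream(context.TODO())
	if err != nil {
		panic(err)
	}
	defer stream.Close()
	io.Copy(os.Stdout, stream) // copy the container log to stdout
}
```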
• [SLOW TEST:6.856 seconds] [k8s.io] Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":303,"completed":201,"skipped":3463,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:41:39.565: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 17 12:41:39.692: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:41:46.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-405" for this suite. 
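Annotation: the Pods exec test above drives the pod's "exec" subresource over a websocket. The common client-go path to the same endpoint is the SPDY executor, used in this sketch; namespace, pod name, and command are placeholders:

```go
package main

import (
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/remotecommand"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Build a request against the pod's "exec" subresource, the endpoint
	// the websocket test drives directly.
	req := cs.CoreV1().RESTClient().Post().
		Resource("pods").Namespace("pods-example").Name("pod-exec-websocket").
		SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Command: []string{"echo", "remote execution"},
			Stdout:  true,
			Stderr:  true,
		}, scheme.ParameterCodec)

	// Run the command and wire its output streams back to this process.
	exec, err := remotecommand.NewSPDYExecutor(cfg, "POST", req.URL())
	if err != nil {
		panic(err)
	}
	err = exec.Stream(remotecommand.StreamOptions{Stdout: os.Stdout, Stderr: os.Stderr})
	if err != nil {
		panic(err)
	}
}
```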
• [SLOW TEST:6.449 seconds] [k8s.io] Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":303,"completed":202,"skipped":3470,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} S ------------------------------ [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:41:46.014: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 17 12:43:46.475: INFO: Deleting pod "var-expansion-e7e8ac38-6124-4c91-bff9-145e6a315a38" in namespace "var-expansion-7358" Aug 17 12:43:46.487: INFO: Wait up to 5m0s for pod "var-expansion-e7e8ac38-6124-4c91-bff9-145e6a315a38" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:43:48.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7358" for this suite. 
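Annotation: the Variable Expansion test above builds a pod whose volumeMount subPathExpr expands to an absolute path. The kubelet only accepts relative subpaths, so the container never starts, and after the wait the test simply deletes the pod, which is all the log records. A sketch of the offending mount; every name and the expanded value are illustrative:

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func absoluteSubPathPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name:         "workdir",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{{
				Name:  "dapi-container",
				Image: "busybox", // placeholder
				Env: []corev1.EnvVar{{
					Name:  "POD_NAME",
					Value: "/absolute-path", // expansion result is absolute on purpose
				}},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "workdir",
					MountPath: "/volume_mount",
					// subPathExpr must resolve to a relative path; "$(POD_NAME)"
					// expands to "/absolute-path" here, so the kubelet refuses
					// to start the container and the pod never becomes ready.
					SubPathExpr: "$(POD_NAME)",
				}},
			}},
		},
	}
}

func main() { _ = absoluteSubPathPod() }
```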
• [SLOW TEST:122.746 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]","total":303,"completed":203,"skipped":3471,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:43:48.761: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-upd-001f09c8-e001-430d-ae63-bc76ff7b9420 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-001f09c8-e001-430d-ae63-bc76ff7b9420 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:45:01.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5946" for this suite. 
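Annotation: the ConfigMap volume test above creates a ConfigMap, mounts it in a pod, updates the ConfigMap's data, and then polls the mounted file until the kubelet's periodic sync rewrites it, which is why the test runs for over a minute. A sketch of the create-then-update half; the namespace, names, and keys are placeholders:

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	cms := cs.CoreV1().ConfigMaps("configmap-example")

	// Create the ConfigMap that a pod will mount as a volume.
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-upd"},
		Data:       map[string]string{"data-1": "value-1"},
	}
	cm, err = cms.Create(context.TODO(), cm, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}

	// Update the data in place; the kubelet propagates the new contents
	// into the mounted volume on a later sync, without a pod restart.
	cm.Data["data-1"] = "value-2"
	if _, err := cms.Update(context.TODO(), cm, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}
```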
• [SLOW TEST:73.238 seconds] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":204,"skipped":3475,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:45:02.001: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-f826939c-5666-4d0a-a850-ba27b9625809 STEP: Creating a pod to test consume configMaps Aug 17 12:45:02.821: INFO: Waiting up to 5m0s for pod "pod-configmaps-e54b1795-e0e2-4020-a81f-5289a65b5a29" in namespace "configmap-9212" to be "Succeeded or Failed" Aug 17 12:45:02.914: INFO: Pod "pod-configmaps-e54b1795-e0e2-4020-a81f-5289a65b5a29": Phase="Pending", Reason="", readiness=false. Elapsed: 91.971889ms Aug 17 12:45:04.921: INFO: Pod "pod-configmaps-e54b1795-e0e2-4020-a81f-5289a65b5a29": Phase="Pending", Reason="", readiness=false. Elapsed: 2.09910986s Aug 17 12:45:06.943: INFO: Pod "pod-configmaps-e54b1795-e0e2-4020-a81f-5289a65b5a29": Phase="Pending", Reason="", readiness=false. Elapsed: 4.121841923s Aug 17 12:45:09.086: INFO: Pod "pod-configmaps-e54b1795-e0e2-4020-a81f-5289a65b5a29": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.264284598s STEP: Saw pod success Aug 17 12:45:09.086: INFO: Pod "pod-configmaps-e54b1795-e0e2-4020-a81f-5289a65b5a29" satisfied condition "Succeeded or Failed" Aug 17 12:45:09.090: INFO: Trying to get logs from node latest-worker pod pod-configmaps-e54b1795-e0e2-4020-a81f-5289a65b5a29 container configmap-volume-test: STEP: delete the pod Aug 17 12:45:09.693: INFO: Waiting for pod pod-configmaps-e54b1795-e0e2-4020-a81f-5289a65b5a29 to disappear Aug 17 12:45:09.901: INFO: Pod pod-configmaps-e54b1795-e0e2-4020-a81f-5289a65b5a29 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:45:09.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9212" for this suite. • [SLOW TEST:8.174 seconds] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":205,"skipped":3497,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:45:10.180: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-7901 STEP: creating service affinity-nodeport-transition in namespace services-7901 STEP: creating replication controller affinity-nodeport-transition in namespace services-7901 I0817 12:45:11.387433 10 runners.go:190] Created replication controller with name: affinity-nodeport-transition, namespace: services-7901, replica count: 3 I0817 12:45:14.439014 10 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 
terminating, 0 unknown, 0 runningButNotReady I0817 12:45:17.439493 10 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0817 12:45:20.440145 10 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 17 12:45:20.460: INFO: Creating new exec pod Aug 17 12:45:27.657: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-7901 execpod-affinityvmvqm -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80' Aug 17 12:45:31.295: INFO: stderr: "I0817 12:45:31.163184 2529 log.go:181] (0x40003140b0) (0x4000b50000) Create stream\nI0817 12:45:31.167320 2529 log.go:181] (0x40003140b0) (0x4000b50000) Stream added, broadcasting: 1\nI0817 12:45:31.180553 2529 log.go:181] (0x40003140b0) Reply frame received for 1\nI0817 12:45:31.181674 2529 log.go:181] (0x40003140b0) (0x40004af400) Create stream\nI0817 12:45:31.181759 2529 log.go:181] (0x40003140b0) (0x40004af400) Stream added, broadcasting: 3\nI0817 12:45:31.183198 2529 log.go:181] (0x40003140b0) Reply frame received for 3\nI0817 12:45:31.183468 2529 log.go:181] (0x40003140b0) (0x4000912000) Create stream\nI0817 12:45:31.183527 2529 log.go:181] (0x40003140b0) (0x4000912000) Stream added, broadcasting: 5\nI0817 12:45:31.184814 2529 log.go:181] (0x40003140b0) Reply frame received for 5\nI0817 12:45:31.279561 2529 log.go:181] (0x40003140b0) Data frame received for 3\nI0817 12:45:31.279926 2529 log.go:181] (0x40003140b0) Data frame received for 5\nI0817 12:45:31.280047 2529 log.go:181] (0x4000912000) (5) Data frame handling\nI0817 12:45:31.280211 2529 log.go:181] (0x40003140b0) Data frame received for 1\nI0817 12:45:31.280282 2529 log.go:181] (0x4000b50000) (1) Data frame handling\nI0817 12:45:31.280434 2529 log.go:181] (0x40004af400) (3) Data frame handling\nI0817 12:45:31.282240 2529 log.go:181] (0x4000b50000) (1) Data frame sent\nI0817 12:45:31.284364 2529 log.go:181] (0x40003140b0) (0x4000b50000) Stream removed, broadcasting: 1\nI0817 12:45:31.284471 2529 log.go:181] (0x4000912000) (5) Data frame sent\nI0817 12:45:31.284538 2529 log.go:181] (0x40003140b0) Data frame received for 5\nI0817 12:45:31.284588 2529 log.go:181] (0x4000912000) (5) Data frame handling\n+ nc -zv -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\nI0817 12:45:31.285330 2529 log.go:181] (0x40003140b0) Go away received\nI0817 12:45:31.289005 2529 log.go:181] (0x40003140b0) (0x4000b50000) Stream removed, broadcasting: 1\nI0817 12:45:31.289416 2529 log.go:181] (0x40003140b0) (0x40004af400) Stream removed, broadcasting: 3\nI0817 12:45:31.289614 2529 log.go:181] (0x40003140b0) (0x4000912000) Stream removed, broadcasting: 5\n" Aug 17 12:45:31.297: INFO: stdout: "" Aug 17 12:45:31.304: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-7901 execpod-affinityvmvqm -- /bin/sh -x -c nc -zv -t -w 2 10.110.213.124 80' Aug 17 12:45:35.723: INFO: stderr: "I0817 12:45:35.635257 2550 log.go:181] (0x400003ad10) (0x4000382460) Create stream\nI0817 12:45:35.637538 2550 log.go:181] (0x400003ad10) (0x4000382460) Stream added, broadcasting: 1\nI0817 12:45:35.648398 2550 log.go:181] (0x400003ad10) Reply frame received for 1\nI0817 12:45:35.648950 2550 log.go:181] 
(0x400003ad10) (0x4000466000) Create stream\nI0817 12:45:35.649004 2550 log.go:181] (0x400003ad10) (0x4000466000) Stream added, broadcasting: 3\nI0817 12:45:35.650289 2550 log.go:181] (0x400003ad10) Reply frame received for 3\nI0817 12:45:35.650598 2550 log.go:181] (0x400003ad10) (0x400011a820) Create stream\nI0817 12:45:35.650669 2550 log.go:181] (0x400003ad10) (0x400011a820) Stream added, broadcasting: 5\nI0817 12:45:35.651649 2550 log.go:181] (0x400003ad10) Reply frame received for 5\nI0817 12:45:35.701333 2550 log.go:181] (0x400003ad10) Data frame received for 3\nI0817 12:45:35.702004 2550 log.go:181] (0x4000466000) (3) Data frame handling\nI0817 12:45:35.702159 2550 log.go:181] (0x400003ad10) Data frame received for 1\nI0817 12:45:35.702603 2550 log.go:181] (0x4000382460) (1) Data frame handling\nI0817 12:45:35.702886 2550 log.go:181] (0x400003ad10) Data frame received for 5\nI0817 12:45:35.702963 2550 log.go:181] (0x400011a820) (5) Data frame handling\nI0817 12:45:35.704353 2550 log.go:181] (0x400011a820) (5) Data frame sent\nI0817 12:45:35.704499 2550 log.go:181] (0x400003ad10) Data frame received for 5\nI0817 12:45:35.704561 2550 log.go:181] (0x400011a820) (5) Data frame handling\n+ nc -zv -t -w 2 10.110.213.124 80\nConnection to 10.110.213.124 80 port [tcp/http] succeeded!\nI0817 12:45:35.705253 2550 log.go:181] (0x4000382460) (1) Data frame sent\nI0817 12:45:35.706714 2550 log.go:181] (0x400003ad10) (0x4000382460) Stream removed, broadcasting: 1\nI0817 12:45:35.708851 2550 log.go:181] (0x400003ad10) Go away received\nI0817 12:45:35.712150 2550 log.go:181] (0x400003ad10) (0x4000382460) Stream removed, broadcasting: 1\nI0817 12:45:35.712641 2550 log.go:181] (0x400003ad10) (0x4000466000) Stream removed, broadcasting: 3\nI0817 12:45:35.713054 2550 log.go:181] (0x400003ad10) (0x400011a820) Stream removed, broadcasting: 5\n" Aug 17 12:45:35.724: INFO: stdout: "" Aug 17 12:45:35.724: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-7901 execpod-affinityvmvqm -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.11 31543' Aug 17 12:45:37.436: INFO: stderr: "I0817 12:45:37.344991 2571 log.go:181] (0x4000af8000) (0x40006ae000) Create stream\nI0817 12:45:37.347270 2571 log.go:181] (0x4000af8000) (0x40006ae000) Stream added, broadcasting: 1\nI0817 12:45:37.360670 2571 log.go:181] (0x4000af8000) Reply frame received for 1\nI0817 12:45:37.361680 2571 log.go:181] (0x4000af8000) (0x4000488d20) Create stream\nI0817 12:45:37.361759 2571 log.go:181] (0x4000af8000) (0x4000488d20) Stream added, broadcasting: 3\nI0817 12:45:37.363193 2571 log.go:181] (0x4000af8000) Reply frame received for 3\nI0817 12:45:37.363586 2571 log.go:181] (0x4000af8000) (0x400079e000) Create stream\nI0817 12:45:37.363667 2571 log.go:181] (0x4000af8000) (0x400079e000) Stream added, broadcasting: 5\nI0817 12:45:37.364881 2571 log.go:181] (0x4000af8000) Reply frame received for 5\nI0817 12:45:37.415938 2571 log.go:181] (0x4000af8000) Data frame received for 5\nI0817 12:45:37.416524 2571 log.go:181] (0x400079e000) (5) Data frame handling\nI0817 12:45:37.416956 2571 log.go:181] (0x4000af8000) Data frame received for 3\nI0817 12:45:37.417096 2571 log.go:181] (0x4000488d20) (3) Data frame handling\nI0817 12:45:37.417287 2571 log.go:181] (0x4000af8000) Data frame received for 1\nI0817 12:45:37.417364 2571 log.go:181] (0x40006ae000) (1) Data frame handling\nI0817 12:45:37.418669 2571 log.go:181] (0x40006ae000) (1) Data frame sent\nI0817 12:45:37.418876 2571 
log.go:181] (0x400079e000) (5) Data frame sent\nI0817 12:45:37.419076 2571 log.go:181] (0x4000af8000) Data frame received for 5\nI0817 12:45:37.419146 2571 log.go:181] (0x400079e000) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.11 31543\nI0817 12:45:37.420134 2571 log.go:181] (0x4000af8000) (0x40006ae000) Stream removed, broadcasting: 1\nConnection to 172.18.0.11 31543 port [tcp/31543] succeeded!\nI0817 12:45:37.421320 2571 log.go:181] (0x400079e000) (5) Data frame sent\nI0817 12:45:37.421413 2571 log.go:181] (0x4000af8000) Data frame received for 5\nI0817 12:45:37.421459 2571 log.go:181] (0x400079e000) (5) Data frame handling\nI0817 12:45:37.423176 2571 log.go:181] (0x4000af8000) Go away received\nI0817 12:45:37.426554 2571 log.go:181] (0x4000af8000) (0x40006ae000) Stream removed, broadcasting: 1\nI0817 12:45:37.426846 2571 log.go:181] (0x4000af8000) (0x4000488d20) Stream removed, broadcasting: 3\nI0817 12:45:37.427044 2571 log.go:181] (0x4000af8000) (0x400079e000) Stream removed, broadcasting: 5\n" Aug 17 12:45:37.438: INFO: stdout: "" Aug 17 12:45:37.438: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-7901 execpod-affinityvmvqm -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.14 31543' Aug 17 12:45:39.133: INFO: stderr: "I0817 12:45:39.036598 2592 log.go:181] (0x40006b7ad0) (0x40006ae000) Create stream\nI0817 12:45:39.039032 2592 log.go:181] (0x40006b7ad0) (0x40006ae000) Stream added, broadcasting: 1\nI0817 12:45:39.046821 2592 log.go:181] (0x40006b7ad0) Reply frame received for 1\nI0817 12:45:39.047377 2592 log.go:181] (0x40006b7ad0) (0x4000642000) Create stream\nI0817 12:45:39.047429 2592 log.go:181] (0x40006b7ad0) (0x4000642000) Stream added, broadcasting: 3\nI0817 12:45:39.048381 2592 log.go:181] (0x40006b7ad0) Reply frame received for 3\nI0817 12:45:39.048585 2592 log.go:181] (0x40006b7ad0) (0x40006ae0a0) Create stream\nI0817 12:45:39.048630 2592 log.go:181] (0x40006b7ad0) (0x40006ae0a0) Stream added, broadcasting: 5\nI0817 12:45:39.049848 2592 log.go:181] (0x40006b7ad0) Reply frame received for 5\nI0817 12:45:39.117648 2592 log.go:181] (0x40006b7ad0) Data frame received for 3\nI0817 12:45:39.117869 2592 log.go:181] (0x4000642000) (3) Data frame handling\nI0817 12:45:39.118036 2592 log.go:181] (0x40006b7ad0) Data frame received for 5\nI0817 12:45:39.118115 2592 log.go:181] (0x40006ae0a0) (5) Data frame handling\nI0817 12:45:39.118178 2592 log.go:181] (0x40006b7ad0) Data frame received for 1\nI0817 12:45:39.118241 2592 log.go:181] (0x40006ae000) (1) Data frame handling\nI0817 12:45:39.119596 2592 log.go:181] (0x40006ae000) (1) Data frame sent\nI0817 12:45:39.119659 2592 log.go:181] (0x40006ae0a0) (5) Data frame sent\nI0817 12:45:39.120048 2592 log.go:181] (0x40006b7ad0) Data frame received for 5\nI0817 12:45:39.120114 2592 log.go:181] (0x40006ae0a0) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.14 31543\nConnection to 172.18.0.14 31543 port [tcp/31543] succeeded!\nI0817 12:45:39.121678 2592 log.go:181] (0x40006b7ad0) (0x40006ae000) Stream removed, broadcasting: 1\nI0817 12:45:39.122750 2592 log.go:181] (0x40006b7ad0) Go away received\nI0817 12:45:39.125945 2592 log.go:181] (0x40006b7ad0) (0x40006ae000) Stream removed, broadcasting: 1\nI0817 12:45:39.126259 2592 log.go:181] (0x40006b7ad0) (0x4000642000) Stream removed, broadcasting: 3\nI0817 12:45:39.126505 2592 log.go:181] (0x40006b7ad0) (0x40006ae0a0) Stream removed, broadcasting: 5\n" Aug 17 12:45:39.134: INFO: stdout: "" Aug 17 12:45:39.168: 
INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-7901 execpod-affinityvmvqm -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.11:31543/ ; done' Aug 17 12:45:40.922: INFO: stderr: "... + seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31543/\n ... (15 further echo/curl iterations and the SPDY stream debug lines elided) ..." Aug 17 12:45:40.927: INFO: stdout: "\naffinity-nodeport-transition-zhnz9\naffinity-nodeport-transition-hwb6q\naffinity-nodeport-transition-qch7r\naffinity-nodeport-transition-hwb6q\naffinity-nodeport-transition-qch7r\naffinity-nodeport-transition-zhnz9\naffinity-nodeport-transition-qch7r\naffinity-nodeport-transition-hwb6q\naffinity-nodeport-transition-hwb6q\naffinity-nodeport-transition-hwb6q\naffinity-nodeport-transition-zhnz9\naffinity-nodeport-transition-zhnz9\naffinity-nodeport-transition-qch7r\naffinity-nodeport-transition-hwb6q\naffinity-nodeport-transition-zhnz9\naffinity-nodeport-transition-zhnz9" Aug 17 12:45:40.927: INFO: Received response from host: affinity-nodeport-transition-zhnz9 Aug 17 12:45:40.927: INFO: Received response from host: affinity-nodeport-transition-hwb6q Aug 17 12:45:40.927: INFO: Received response from host: affinity-nodeport-transition-qch7r Aug 17 12:45:40.927: INFO: Received response from host: affinity-nodeport-transition-hwb6q Aug 17 12:45:40.927: INFO: Received response from host: affinity-nodeport-transition-qch7r Aug 17 12:45:40.927: INFO: Received response from host: affinity-nodeport-transition-zhnz9 Aug 17 12:45:40.927: INFO: Received response from host: affinity-nodeport-transition-qch7r Aug 17 12:45:40.928: INFO: Received response from host: affinity-nodeport-transition-hwb6q Aug 17 12:45:40.928: INFO: Received response from host: affinity-nodeport-transition-hwb6q Aug 17 12:45:40.928: INFO: Received response from host: affinity-nodeport-transition-hwb6q Aug 17 12:45:40.928: INFO: Received response from host: affinity-nodeport-transition-zhnz9 Aug 17 12:45:40.928: INFO: Received response from host: affinity-nodeport-transition-zhnz9 Aug 17 12:45:40.928: INFO: Received response from host: affinity-nodeport-transition-qch7r Aug 17 12:45:40.928: INFO: Received response from host: affinity-nodeport-transition-hwb6q Aug 17 12:45:40.928: INFO: Received response from host: affinity-nodeport-transition-zhnz9 Aug 17 12:45:40.928: INFO: Received response from host: affinity-nodeport-transition-zhnz9 Aug 17 12:45:40.939: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-7901 execpod-affinityvmvqm -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.11:31543/ ; done' Aug 17 12:45:42.626: INFO: stderr: "... + seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31543/\n ... (15 further echo/curl iterations and the SPDY stream debug lines elided) ..." Aug 17 12:45:42.630: INFO: stdout: "\naffinity-nodeport-transition-hwb6q\naffinity-nodeport-transition-hwb6q\naffinity-nodeport-transition-hwb6q\naffinity-nodeport-transition-hwb6q\naffinity-nodeport-transition-hwb6q\naffinity-nodeport-transition-hwb6q\naffinity-nodeport-transition-hwb6q\naffinity-nodeport-transition-hwb6q\naffinity-nodeport-transition-hwb6q\naffinity-nodeport-transition-hwb6q\naffinity-nodeport-transition-hwb6q\naffinity-nodeport-transition-hwb6q\naffinity-nodeport-transition-hwb6q\naffinity-nodeport-transition-hwb6q\naffinity-nodeport-transition-hwb6q\naffinity-nodeport-transition-hwb6q" Aug 17 12:45:42.630: INFO: Received response from host: affinity-nodeport-transition-hwb6q Aug 17 12:45:42.630: INFO: Received response from host: affinity-nodeport-transition-hwb6q Aug 17 12:45:42.630: INFO: Received response from host: affinity-nodeport-transition-hwb6q Aug 17 12:45:42.630: INFO: Received response from host: affinity-nodeport-transition-hwb6q Aug 17 12:45:42.630: INFO: Received response from host: affinity-nodeport-transition-hwb6q Aug 17 12:45:42.630: INFO: Received response from host: affinity-nodeport-transition-hwb6q Aug 17 12:45:42.630: INFO: Received response from host: affinity-nodeport-transition-hwb6q Aug 17 12:45:42.630: INFO: Received response from host: affinity-nodeport-transition-hwb6q Aug 17 12:45:42.630: INFO: Received response from host: affinity-nodeport-transition-hwb6q Aug 17 12:45:42.630: INFO: Received response from host: affinity-nodeport-transition-hwb6q Aug 17 12:45:42.630: INFO: Received response from host: affinity-nodeport-transition-hwb6q Aug 17 12:45:42.630: INFO: Received response from host: affinity-nodeport-transition-hwb6q Aug 17 12:45:42.630: INFO: Received response from host: affinity-nodeport-transition-hwb6q Aug 17 12:45:42.630: INFO: Received response from host: affinity-nodeport-transition-hwb6q Aug 17 12:45:42.630: INFO: Received response from host: affinity-nodeport-transition-hwb6q Aug 17 12:45:42.630: INFO: Received response from host: affinity-nodeport-transition-hwb6q Aug 17 12:45:42.630: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-7901, will wait for the garbage collector to delete the pods Aug 17 12:45:42.736: INFO: Deleting ReplicationController affinity-nodeport-transition took: 22.80859ms Aug 17 12:45:43.236: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 
500.612866ms [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:46:00.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7901" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:50.402 seconds] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":303,"completed":206,"skipped":3543,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:46:00.584: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 17 12:46:07.598: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 17 12:46:09.621: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733265167, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733265167, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733265168, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733265167, loc:(*time.Location)(0x6e4f160)}}, 
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 17 12:46:11.629: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733265167, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733265167, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733265168, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733265167, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 17 12:46:14.662: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:46:14.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-556" for this suite. STEP: Destroying namespace "webhook-556-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:14.157 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":303,"completed":207,"skipped":3548,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:46:14.741: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:46:20.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1599" for this suite. 
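The Kubelet case above creates a busybox pod whose command writes to stdout and then asserts that the text is visible through the kubelet's logs endpoint. A minimal client-go sketch of that pattern follows (pod name, image, echoed text, and namespace are illustrative assumptions, not the suite's values; the wait for pod completion is omitted):

package main

import (
    "context"
    "io"
    "os"

    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    ctx := context.Background()

    // A one-shot busybox pod that writes a known string to stdout.
    pod := &v1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "busybox-logs-demo"},
        Spec: v1.PodSpec{
            RestartPolicy: v1.RestartPolicyNever,
            Containers: []v1.Container{{
                Name:    "main",
                Image:   "busybox",
                Command: []string{"sh", "-c", "echo hello from the container"},
            }},
        },
    }
    if _, err := cs.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
        panic(err)
    }

    // ... wait for the pod to reach Succeeded (omitted), then read its logs:
    stream, err := cs.CoreV1().Pods("default").GetLogs("busybox-logs-demo", &v1.PodLogOptions{}).Stream(ctx)
    if err != nil {
        panic(err)
    }
    defer stream.Close()
    io.Copy(os.Stdout, stream) // should print "hello from the container"
}
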
• [SLOW TEST:6.131 seconds] [k8s.io] Kubelet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when scheduling a busybox command in a pod /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:41 should print the output to logs [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":303,"completed":208,"skipped":3552,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:46:20.876: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-configmap-p22m STEP: Creating a pod to test atomic-volume-subpath Aug 17 12:46:21.144: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-p22m" in namespace "subpath-4046" to be "Succeeded or Failed" Aug 17 12:46:21.149: INFO: Pod "pod-subpath-test-configmap-p22m": Phase="Pending", Reason="", readiness=false. Elapsed: 4.521247ms Aug 17 12:46:23.156: INFO: Pod "pod-subpath-test-configmap-p22m": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011116858s Aug 17 12:46:25.164: INFO: Pod "pod-subpath-test-configmap-p22m": Phase="Running", Reason="", readiness=true. Elapsed: 4.019369098s Aug 17 12:46:27.171: INFO: Pod "pod-subpath-test-configmap-p22m": Phase="Running", Reason="", readiness=true. Elapsed: 6.026460868s Aug 17 12:46:29.178: INFO: Pod "pod-subpath-test-configmap-p22m": Phase="Running", Reason="", readiness=true. Elapsed: 8.033264155s Aug 17 12:46:31.257: INFO: Pod "pod-subpath-test-configmap-p22m": Phase="Running", Reason="", readiness=true. Elapsed: 10.112891246s Aug 17 12:46:33.263: INFO: Pod "pod-subpath-test-configmap-p22m": Phase="Running", Reason="", readiness=true. Elapsed: 12.118968463s Aug 17 12:46:35.316: INFO: Pod "pod-subpath-test-configmap-p22m": Phase="Running", Reason="", readiness=true. 
Elapsed: 14.171006533s Aug 17 12:46:37.322: INFO: Pod "pod-subpath-test-configmap-p22m": Phase="Running", Reason="", readiness=true. Elapsed: 16.177929254s Aug 17 12:46:39.347: INFO: Pod "pod-subpath-test-configmap-p22m": Phase="Running", Reason="", readiness=true. Elapsed: 18.202005068s Aug 17 12:46:41.353: INFO: Pod "pod-subpath-test-configmap-p22m": Phase="Running", Reason="", readiness=true. Elapsed: 20.208465786s Aug 17 12:46:43.360: INFO: Pod "pod-subpath-test-configmap-p22m": Phase="Running", Reason="", readiness=true. Elapsed: 22.214985783s Aug 17 12:46:45.367: INFO: Pod "pod-subpath-test-configmap-p22m": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.222753078s STEP: Saw pod success Aug 17 12:46:45.367: INFO: Pod "pod-subpath-test-configmap-p22m" satisfied condition "Succeeded or Failed" Aug 17 12:46:45.373: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-configmap-p22m container test-container-subpath-configmap-p22m: STEP: delete the pod Aug 17 12:46:45.426: INFO: Waiting for pod pod-subpath-test-configmap-p22m to disappear Aug 17 12:46:45.430: INFO: Pod pod-subpath-test-configmap-p22m no longer exists STEP: Deleting pod pod-subpath-test-configmap-p22m Aug 17 12:46:45.430: INFO: Deleting pod "pod-subpath-test-configmap-p22m" in namespace "subpath-4046" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:46:45.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4046" for this suite. • [SLOW TEST:24.627 seconds] [sig-storage] Subpath /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":303,"completed":209,"skipped":3579,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:46:45.504: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] 
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on tmpfs Aug 17 12:46:45.619: INFO: Waiting up to 5m0s for pod "pod-7f86a2ec-00d5-4956-a91b-8bedf464fff2" in namespace "emptydir-6097" to be "Succeeded or Failed" Aug 17 12:46:45.634: INFO: Pod "pod-7f86a2ec-00d5-4956-a91b-8bedf464fff2": Phase="Pending", Reason="", readiness=false. Elapsed: 15.391853ms Aug 17 12:46:47.671: INFO: Pod "pod-7f86a2ec-00d5-4956-a91b-8bedf464fff2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052253907s Aug 17 12:46:49.695: INFO: Pod "pod-7f86a2ec-00d5-4956-a91b-8bedf464fff2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076193335s Aug 17 12:46:51.757: INFO: Pod "pod-7f86a2ec-00d5-4956-a91b-8bedf464fff2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.138502451s Aug 17 12:46:53.765: INFO: Pod "pod-7f86a2ec-00d5-4956-a91b-8bedf464fff2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.146432887s STEP: Saw pod success Aug 17 12:46:53.766: INFO: Pod "pod-7f86a2ec-00d5-4956-a91b-8bedf464fff2" satisfied condition "Succeeded or Failed" Aug 17 12:46:53.809: INFO: Trying to get logs from node latest-worker pod pod-7f86a2ec-00d5-4956-a91b-8bedf464fff2 container test-container: STEP: delete the pod Aug 17 12:46:53.953: INFO: Waiting for pod pod-7f86a2ec-00d5-4956-a91b-8bedf464fff2 to disappear Aug 17 12:46:53.982: INFO: Pod pod-7f86a2ec-00d5-4956-a91b-8bedf464fff2 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:46:53.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6097" for this suite. 
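The emptydir case above runs as a non-root user, mounts a memory-backed (tmpfs) emptyDir, and verifies a file created with 0644 permissions. A sketch of a pod spec reproducing that setup (the UID, names, mount path, and namespace are assumptions for illustration; the conformance test itself uses a dedicated mounttest image rather than busybox):

package main

import (
    "context"

    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    nonRoot := int64(1000) // hypothetical non-root UID
    pod := &v1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-demo"},
        Spec: v1.PodSpec{
            RestartPolicy:   v1.RestartPolicyNever,
            SecurityContext: &v1.PodSecurityContext{RunAsUser: &nonRoot},
            Volumes: []v1.Volume{{
                Name: "scratch",
                VolumeSource: v1.VolumeSource{
                    // Medium "Memory" backs the volume with tmpfs, not node disk.
                    EmptyDir: &v1.EmptyDirVolumeSource{Medium: v1.StorageMediumMemory},
                },
            }},
            Containers: []v1.Container{{
                Name:  "writer",
                Image: "busybox",
                // Create a file with 0644 permissions and list it to verify mode.
                Command: []string{"sh", "-c",
                    "touch /mnt/scratch/f && chmod 0644 /mnt/scratch/f && ls -l /mnt/scratch/f"},
                VolumeMounts: []v1.VolumeMount{{Name: "scratch", MountPath: "/mnt/scratch"}},
            }},
        },
    }
    if _, err := cs.CoreV1().Pods("default").Create(context.Background(), pod, metav1.CreateOptions{}); err != nil {
        panic(err)
    }
}
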
• [SLOW TEST:8.494 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":210,"skipped":3579,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:46:53.999: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Aug 17 12:47:08.780: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Aug 17 12:47:08.798: INFO: Pod pod-with-prestop-http-hook still exists Aug 17 12:47:10.799: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Aug 17 12:47:10.810: INFO: Pod pod-with-prestop-http-hook still exists Aug 17 12:47:12.798: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Aug 17 12:47:13.061: INFO: Pod pod-with-prestop-http-hook still exists Aug 17 12:47:14.798: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Aug 17 12:47:15.046: INFO: Pod pod-with-prestop-http-hook still exists Aug 17 12:47:16.799: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Aug 17 12:47:16.862: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:47:16.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-168" for this suite. 
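The lifecycle-hook case above deploys a handler pod (pod-handle-http-request), then a pod whose container carries a PreStop HTTPGet hook pointed at the handler; deleting the hooked pod must trigger the GET before the container exits. A sketch of the hooked container spec (the handler IP, port, and path are hypothetical stand-ins; on the v1.19 API used in this run the hook type is v1.Handler, while newer releases rename it v1.LifecycleHandler):

package main

import (
    "context"

    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    handlerPodIP := "10.0.0.10" // hypothetical IP of the handler pod
    pod := &v1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-http-hook-demo"},
        Spec: v1.PodSpec{
            Containers: []v1.Container{{
                Name:  "main",
                Image: "k8s.gcr.io/pause:3.2",
                Lifecycle: &v1.Lifecycle{
                    // v1.Handler on the 1.19 API; v1.LifecycleHandler later.
                    PreStop: &v1.Handler{
                        HTTPGet: &v1.HTTPGetAction{
                            Path: "/echo?msg=prestop",
                            Host: handlerPodIP,
                            Port: intstr.FromInt(8080),
                        },
                    },
                },
            }},
        },
    }
    if _, err := cs.CoreV1().Pods("default").Create(context.Background(), pod, metav1.CreateOptions{}); err != nil {
        panic(err)
    }
    // Deleting this pod later fires the PreStop GET against the handler pod,
    // which is what the "check prestop hook" step above verifies.
}
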
• [SLOW TEST:22.906 seconds] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":303,"completed":211,"skipped":3590,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:47:16.907: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Aug 17 12:47:17.245: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Aug 17 12:47:17.629: INFO: Waiting for terminating namespaces to be deleted... 
Aug 17 12:47:18.024: INFO: Logging pods the apiserver thinks are on node latest-worker before test Aug 17 12:47:18.034: INFO: pod-handle-http-request from container-lifecycle-hook-168 started at 2020-08-17 12:46:54 +0000 UTC (1 container status recorded) Aug 17 12:47:18.035: INFO: Container pod-handle-http-request ready: true, restart count 0 Aug 17 12:47:18.035: INFO: kindnet-gmpqb from kube-system started at 2020-08-15 09:42:30 +0000 UTC (1 container status recorded) Aug 17 12:47:18.035: INFO: Container kindnet-cni ready: true, restart count 0 Aug 17 12:47:18.035: INFO: kube-proxy-82wrf from kube-system started at 2020-08-15 09:42:30 +0000 UTC (1 container status recorded) Aug 17 12:47:18.035: INFO: Container kube-proxy ready: true, restart count 0 Aug 17 12:47:18.035: INFO: Logging pods the apiserver thinks are on node latest-worker2 before test Aug 17 12:47:18.315: INFO: kindnet-grzzh from kube-system started at 2020-08-15 09:42:30 +0000 UTC (1 container status recorded) Aug 17 12:47:18.316: INFO: Container kindnet-cni ready: true, restart count 0 Aug 17 12:47:18.316: INFO: kube-proxy-fjk8r from kube-system started at 2020-08-15 09:42:29 +0000 UTC (1 container status recorded) Aug 17 12:47:18.316: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: verifying the node has the label node latest-worker STEP: verifying the node has the label node latest-worker2 Aug 17 12:47:18.555: INFO: Pod pod-handle-http-request requesting resource cpu=0m on Node latest-worker Aug 17 12:47:18.556: INFO: Pod kindnet-gmpqb requesting resource cpu=100m on Node latest-worker Aug 17 12:47:18.556: INFO: Pod kindnet-grzzh requesting resource cpu=100m on Node latest-worker2 Aug 17 12:47:18.556: INFO: Pod kube-proxy-82wrf requesting resource cpu=0m on Node latest-worker Aug 17 12:47:18.556: INFO: Pod kube-proxy-fjk8r requesting resource cpu=0m on Node latest-worker2 STEP: Starting Pods to consume most of the cluster CPU. Aug 17 12:47:18.556: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker Aug 17 12:47:18.567: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker2 STEP: Creating another pod that requires an unavailable amount of CPU. 
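The cpu=11130m filler request is sized from each node's allocatable CPU minus the requests of the pods already logged above, so the filler leaves no headroom and the follow-up pod must fail scheduling with "Insufficient cpu". A minimal client-go sketch of that sizing (node name and kubeconfig path are taken from this run; init containers are ignored and error handling is reduced to panics; an illustration, not the suite's own helper):

package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    ctx := context.Background()

    nodeName := "latest-worker" // one of the two workers in this run
    node, err := cs.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
    if err != nil {
        panic(err)
    }
    allocatable := node.Status.Allocatable.Cpu().MilliValue()

    // Sum the CPU requests of every pod already bound to the node.
    pods, err := cs.CoreV1().Pods("").List(ctx, metav1.ListOptions{
        FieldSelector: "spec.nodeName=" + nodeName,
    })
    if err != nil {
        panic(err)
    }
    var requested int64
    for _, p := range pods.Items {
        for _, c := range p.Spec.Containers {
            requested += c.Resources.Requests.Cpu().MilliValue()
        }
    }

    // A filler pod requesting the difference exhausts the node, so any
    // additional pod with a CPU request cannot be scheduled there.
    fmt.Printf("filler pod should request cpu=%dm\n", allocatable-requested)
}
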
STEP: Considering event: Type = [Normal], Name = [filler-pod-d16baa4b-ff82-4901-a29a-41b8fc8e0e8a.162c0efca030bf09], Reason = [Started], Message = [Started container filler-pod-d16baa4b-ff82-4901-a29a-41b8fc8e0e8a] STEP: Considering event: Type = [Normal], Name = [filler-pod-d16baa4b-ff82-4901-a29a-41b8fc8e0e8a.162c0efb623af1b3], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-f1e651eb-cf5b-4455-afcd-358b07fd6f86.162c0efaeaa90615], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8208/filler-pod-f1e651eb-cf5b-4455-afcd-358b07fd6f86 to latest-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-d16baa4b-ff82-4901-a29a-41b8fc8e0e8a.162c0efc5d182244], Reason = [Created], Message = [Created container filler-pod-d16baa4b-ff82-4901-a29a-41b8fc8e0e8a] STEP: Considering event: Type = [Normal], Name = [filler-pod-f1e651eb-cf5b-4455-afcd-358b07fd6f86.162c0efc48f6afd1], Reason = [Created], Message = [Created container filler-pod-f1e651eb-cf5b-4455-afcd-358b07fd6f86] STEP: Considering event: Type = [Normal], Name = [filler-pod-d16baa4b-ff82-4901-a29a-41b8fc8e0e8a.162c0efae87be39e], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8208/filler-pod-d16baa4b-ff82-4901-a29a-41b8fc8e0e8a to latest-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-f1e651eb-cf5b-4455-afcd-358b07fd6f86.162c0efb61f10e57], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-f1e651eb-cf5b-4455-afcd-358b07fd6f86.162c0efc820e4d5a], Reason = [Started], Message = [Started container filler-pod-f1e651eb-cf5b-4455-afcd-358b07fd6f86] STEP: Considering event: Type = [Warning], Name = [additional-pod.162c0efce88c90dd], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.] STEP: Considering event: Type = [Warning], Name = [additional-pod.162c0efcea3a16b5], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node latest-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node latest-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:47:28.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8208" for this suite. 
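------------------------------
What the spec above verifies: the scheduler sums the CPU requests already bound to each node (100m for kindnet, 0m for the proxies here), tops each node up with a filler pod sized to the remaining allocatable CPU (11130m in this run), and then expects one further pod requesting any CPU to fail with "Insufficient cpu". A sketch of the filler-size arithmetic with client-go follows; the kubeconfig path and node name are taken from the log above.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// remainingMilliCPU computes a node's allocatable CPU minus the CPU
// requests of pods already bound to it -- the quantity the filler pod
// must request so that any additional request is unschedulable.
func remainingMilliCPU(cs *kubernetes.Clientset, nodeName string) (int64, error) {
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
	if err != nil {
		return 0, err
	}
	alloc := node.Status.Allocatable.Cpu().MilliValue()

	pods, err := cs.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{
		FieldSelector: "spec.nodeName=" + nodeName,
	})
	if err != nil {
		return 0, err
	}
	var requested int64
	for _, p := range pods.Items {
		for _, c := range p.Spec.Containers {
			requested += c.Resources.Requests.Cpu().MilliValue()
		}
	}
	return alloc - requested, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	free, err := remainingMilliCPU(cs, "latest-worker")
	if err != nil {
		panic(err)
	}
	// In the run above this came out to 11130m; once the filler pod
	// requests exactly this, the additional-pod fails to schedule with
	// "Insufficient cpu", which is the event the spec waits for.
	fmt.Printf("filler pod should request cpu=%dm\n", free)
}
------------------------------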
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:11.729 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":303,"completed":212,"skipped":3611,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:47:28.639: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-5323 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating stateful set ss in namespace statefulset-5323 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-5323 Aug 17 12:47:28.873: INFO: Found 0 stateful pods, waiting for 1 Aug 17 12:47:38.882: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Aug 17 12:47:38.888: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5323 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 17 12:47:41.573: INFO: stderr: "I0817 12:47:41.030194 2652 log.go:181] (0x40007bf340) (0x4000d0c6e0) Create stream\nI0817 12:47:41.032807 2652 log.go:181] (0x40007bf340) (0x4000d0c6e0) Stream added, broadcasting: 1\nI0817 12:47:41.053397 2652 log.go:181] (0x40007bf340) Reply frame received 
for 1\nI0817 12:47:41.054113 2652 log.go:181] (0x40007bf340) (0x4000d0c000) Create stream\nI0817 12:47:41.054186 2652 log.go:181] (0x40007bf340) (0x4000d0c000) Stream added, broadcasting: 3\nI0817 12:47:41.055541 2652 log.go:181] (0x40007bf340) Reply frame received for 3\nI0817 12:47:41.055824 2652 log.go:181] (0x40007bf340) (0x4000738000) Create stream\nI0817 12:47:41.055893 2652 log.go:181] (0x40007bf340) (0x4000738000) Stream added, broadcasting: 5\nI0817 12:47:41.057087 2652 log.go:181] (0x40007bf340) Reply frame received for 5\nI0817 12:47:41.116177 2652 log.go:181] (0x40007bf340) Data frame received for 5\nI0817 12:47:41.116560 2652 log.go:181] (0x4000738000) (5) Data frame handling\nI0817 12:47:41.117271 2652 log.go:181] (0x4000738000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0817 12:47:41.550670 2652 log.go:181] (0x40007bf340) Data frame received for 3\nI0817 12:47:41.550864 2652 log.go:181] (0x40007bf340) Data frame received for 5\nI0817 12:47:41.550968 2652 log.go:181] (0x4000738000) (5) Data frame handling\nI0817 12:47:41.551038 2652 log.go:181] (0x4000d0c000) (3) Data frame handling\nI0817 12:47:41.551130 2652 log.go:181] (0x4000d0c000) (3) Data frame sent\nI0817 12:47:41.551207 2652 log.go:181] (0x40007bf340) Data frame received for 3\nI0817 12:47:41.551274 2652 log.go:181] (0x4000d0c000) (3) Data frame handling\nI0817 12:47:41.552495 2652 log.go:181] (0x40007bf340) Data frame received for 1\nI0817 12:47:41.552636 2652 log.go:181] (0x4000d0c6e0) (1) Data frame handling\nI0817 12:47:41.552832 2652 log.go:181] (0x4000d0c6e0) (1) Data frame sent\nI0817 12:47:41.553432 2652 log.go:181] (0x40007bf340) (0x4000d0c6e0) Stream removed, broadcasting: 1\nI0817 12:47:41.556678 2652 log.go:181] (0x40007bf340) Go away received\nI0817 12:47:41.558765 2652 log.go:181] (0x40007bf340) (0x4000d0c6e0) Stream removed, broadcasting: 1\nI0817 12:47:41.559448 2652 log.go:181] (0x40007bf340) (0x4000d0c000) Stream removed, broadcasting: 3\nI0817 12:47:41.560060 2652 log.go:181] (0x40007bf340) (0x4000738000) Stream removed, broadcasting: 5\n" Aug 17 12:47:41.574: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 17 12:47:41.574: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 17 12:47:41.646: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Aug 17 12:47:51.675: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Aug 17 12:47:51.675: INFO: Waiting for statefulset status.replicas updated to 0 Aug 17 12:47:51.827: INFO: POD NODE PHASE GRACE CONDITIONS Aug 17 12:47:51.827: INFO: ss-0 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 12:47:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 12:47:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 12:47:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 12:47:28 +0000 UTC }] Aug 17 12:47:51.827: INFO: Aug 17 12:47:51.828: INFO: StatefulSet ss has not reached scale 3, at 1 Aug 17 12:47:53.029: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.97517921s Aug 17 12:47:54.055: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.774155491s Aug 17 12:47:55.346: INFO: Verifying 
statefulset ss doesn't scale past 3 for another 6.748442465s Aug 17 12:47:56.627: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.457745433s Aug 17 12:47:57.909: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.176577231s Aug 17 12:47:59.008: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.894137961s Aug 17 12:48:00.018: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.795667207s Aug 17 12:48:01.028: INFO: Verifying statefulset ss doesn't scale past 3 for another 785.174135ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5323 Aug 17 12:48:02.269: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5323 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 17 12:48:04.318: INFO: stderr: "I0817 12:48:03.998322 2673 log.go:181] (0x400022ab00) (0x4000bc9c20) Create stream\nI0817 12:48:04.000792 2673 log.go:181] (0x400022ab00) (0x4000bc9c20) Stream added, broadcasting: 1\nI0817 12:48:04.012000 2673 log.go:181] (0x400022ab00) Reply frame received for 1\nI0817 12:48:04.013408 2673 log.go:181] (0x400022ab00) (0x4000bc9d60) Create stream\nI0817 12:48:04.013529 2673 log.go:181] (0x400022ab00) (0x4000bc9d60) Stream added, broadcasting: 3\nI0817 12:48:04.015190 2673 log.go:181] (0x400022ab00) Reply frame received for 3\nI0817 12:48:04.015465 2673 log.go:181] (0x400022ab00) (0x4000158000) Create stream\nI0817 12:48:04.015558 2673 log.go:181] (0x400022ab00) (0x4000158000) Stream added, broadcasting: 5\nI0817 12:48:04.016975 2673 log.go:181] (0x400022ab00) Reply frame received for 5\nI0817 12:48:04.096525 2673 log.go:181] (0x400022ab00) Data frame received for 5\nI0817 12:48:04.097092 2673 log.go:181] (0x4000158000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0817 12:48:04.098701 2673 log.go:181] (0x4000158000) (5) Data frame sent\nI0817 12:48:04.296343 2673 log.go:181] (0x400022ab00) Data frame received for 3\nI0817 12:48:04.296527 2673 log.go:181] (0x4000bc9d60) (3) Data frame handling\nI0817 12:48:04.296680 2673 log.go:181] (0x400022ab00) Data frame received for 5\nI0817 12:48:04.296947 2673 log.go:181] (0x4000158000) (5) Data frame handling\nI0817 12:48:04.297191 2673 log.go:181] (0x4000bc9d60) (3) Data frame sent\nI0817 12:48:04.297339 2673 log.go:181] (0x400022ab00) Data frame received for 3\nI0817 12:48:04.297480 2673 log.go:181] (0x4000bc9d60) (3) Data frame handling\nI0817 12:48:04.298611 2673 log.go:181] (0x400022ab00) Data frame received for 1\nI0817 12:48:04.298747 2673 log.go:181] (0x4000bc9c20) (1) Data frame handling\nI0817 12:48:04.298904 2673 log.go:181] (0x4000bc9c20) (1) Data frame sent\nI0817 12:48:04.299846 2673 log.go:181] (0x400022ab00) (0x4000bc9c20) Stream removed, broadcasting: 1\nI0817 12:48:04.303031 2673 log.go:181] (0x400022ab00) Go away received\nI0817 12:48:04.306003 2673 log.go:181] (0x400022ab00) (0x4000bc9c20) Stream removed, broadcasting: 1\nI0817 12:48:04.306384 2673 log.go:181] (0x400022ab00) (0x4000bc9d60) Stream removed, broadcasting: 3\nI0817 12:48:04.306618 2673 log.go:181] (0x400022ab00) (0x4000158000) Stream removed, broadcasting: 5\n" Aug 17 12:48:04.319: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 17 12:48:04.319: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> 
'/usr/local/apache2/htdocs/index.html' Aug 17 12:48:04.320: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5323 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 17 12:48:05.914: INFO: stderr: "I0817 12:48:05.818670 2694 log.go:181] (0x40008be000) (0x40008a41e0) Create stream\nI0817 12:48:05.824842 2694 log.go:181] (0x40008be000) (0x40008a41e0) Stream added, broadcasting: 1\nI0817 12:48:05.834782 2694 log.go:181] (0x40008be000) Reply frame received for 1\nI0817 12:48:05.835365 2694 log.go:181] (0x40008be000) (0x40008a4280) Create stream\nI0817 12:48:05.835440 2694 log.go:181] (0x40008be000) (0x40008a4280) Stream added, broadcasting: 3\nI0817 12:48:05.836707 2694 log.go:181] (0x40008be000) Reply frame received for 3\nI0817 12:48:05.837033 2694 log.go:181] (0x40008be000) (0x4000cc2000) Create stream\nI0817 12:48:05.837095 2694 log.go:181] (0x40008be000) (0x4000cc2000) Stream added, broadcasting: 5\nI0817 12:48:05.837937 2694 log.go:181] (0x40008be000) Reply frame received for 5\nI0817 12:48:05.892581 2694 log.go:181] (0x40008be000) Data frame received for 3\nI0817 12:48:05.892962 2694 log.go:181] (0x40008a4280) (3) Data frame handling\nI0817 12:48:05.893204 2694 log.go:181] (0x40008be000) Data frame received for 5\nI0817 12:48:05.893321 2694 log.go:181] (0x4000cc2000) (5) Data frame handling\nI0817 12:48:05.893878 2694 log.go:181] (0x40008be000) Data frame received for 1\nI0817 12:48:05.894084 2694 log.go:181] (0x40008a41e0) (1) Data frame handling\nI0817 12:48:05.894244 2694 log.go:181] (0x40008a41e0) (1) Data frame sent\nI0817 12:48:05.894614 2694 log.go:181] (0x40008a4280) (3) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0817 12:48:05.895167 2694 log.go:181] (0x4000cc2000) (5) Data frame sent\nI0817 12:48:05.895234 2694 log.go:181] (0x40008be000) Data frame received for 5\nI0817 12:48:05.895281 2694 log.go:181] (0x4000cc2000) (5) Data frame handling\nI0817 12:48:05.895910 2694 log.go:181] (0x40008be000) Data frame received for 3\nI0817 12:48:05.896004 2694 log.go:181] (0x40008a4280) (3) Data frame handling\nI0817 12:48:05.897343 2694 log.go:181] (0x40008be000) (0x40008a41e0) Stream removed, broadcasting: 1\nI0817 12:48:05.899439 2694 log.go:181] (0x40008be000) Go away received\nI0817 12:48:05.902965 2694 log.go:181] (0x40008be000) (0x40008a41e0) Stream removed, broadcasting: 1\nI0817 12:48:05.903303 2694 log.go:181] (0x40008be000) (0x40008a4280) Stream removed, broadcasting: 3\nI0817 12:48:05.903526 2694 log.go:181] (0x40008be000) (0x4000cc2000) Stream removed, broadcasting: 5\n" Aug 17 12:48:05.915: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 17 12:48:05.915: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 17 12:48:05.915: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5323 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 17 12:48:07.486: INFO: stderr: "I0817 12:48:07.363582 2714 log.go:181] (0x4000db3340) (0x4000e085a0) Create stream\nI0817 12:48:07.368146 2714 log.go:181] (0x4000db3340) (0x4000e085a0) Stream added, broadcasting: 1\nI0817 12:48:07.387898 2714 log.go:181] (0x4000db3340) Reply frame received for 1\nI0817 
12:48:07.388461 2714 log.go:181] (0x4000db3340) (0x4000c3a0a0) Create stream\nI0817 12:48:07.388520 2714 log.go:181] (0x4000db3340) (0x4000c3a0a0) Stream added, broadcasting: 3\nI0817 12:48:07.390138 2714 log.go:181] (0x4000db3340) Reply frame received for 3\nI0817 12:48:07.390515 2714 log.go:181] (0x4000db3340) (0x400046e000) Create stream\nI0817 12:48:07.390598 2714 log.go:181] (0x4000db3340) (0x400046e000) Stream added, broadcasting: 5\nI0817 12:48:07.391780 2714 log.go:181] (0x4000db3340) Reply frame received for 5\nI0817 12:48:07.463631 2714 log.go:181] (0x4000db3340) Data frame received for 3\nI0817 12:48:07.464060 2714 log.go:181] (0x4000db3340) Data frame received for 1\nI0817 12:48:07.464402 2714 log.go:181] (0x4000db3340) Data frame received for 5\nI0817 12:48:07.464643 2714 log.go:181] (0x4000e085a0) (1) Data frame handling\nI0817 12:48:07.464964 2714 log.go:181] (0x400046e000) (5) Data frame handling\nI0817 12:48:07.465155 2714 log.go:181] (0x4000c3a0a0) (3) Data frame handling\nI0817 12:48:07.466733 2714 log.go:181] (0x4000e085a0) (1) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0817 12:48:07.467516 2714 log.go:181] (0x400046e000) (5) Data frame sent\nI0817 12:48:07.467704 2714 log.go:181] (0x4000db3340) Data frame received for 5\nI0817 12:48:07.467822 2714 log.go:181] (0x4000c3a0a0) (3) Data frame sent\nI0817 12:48:07.467962 2714 log.go:181] (0x4000db3340) Data frame received for 3\nI0817 12:48:07.468077 2714 log.go:181] (0x400046e000) (5) Data frame handling\nI0817 12:48:07.469715 2714 log.go:181] (0x4000db3340) (0x4000e085a0) Stream removed, broadcasting: 1\nI0817 12:48:07.472239 2714 log.go:181] (0x4000c3a0a0) (3) Data frame handling\nI0817 12:48:07.473373 2714 log.go:181] (0x4000db3340) Go away received\nI0817 12:48:07.476047 2714 log.go:181] (0x4000db3340) (0x4000e085a0) Stream removed, broadcasting: 1\nI0817 12:48:07.476552 2714 log.go:181] (0x4000db3340) (0x4000c3a0a0) Stream removed, broadcasting: 3\nI0817 12:48:07.476826 2714 log.go:181] (0x4000db3340) (0x400046e000) Stream removed, broadcasting: 5\n" Aug 17 12:48:07.487: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 17 12:48:07.487: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 17 12:48:07.495: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Aug 17 12:48:07.495: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Aug 17 12:48:07.495: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Aug 17 12:48:07.503: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5323 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 17 12:48:09.089: INFO: stderr: "I0817 12:48:08.984206 2734 log.go:181] (0x4000935c30) (0x400092ca00) Create stream\nI0817 12:48:08.987752 2734 log.go:181] (0x4000935c30) (0x400092ca00) Stream added, broadcasting: 1\nI0817 12:48:09.007147 2734 log.go:181] (0x4000935c30) Reply frame received for 1\nI0817 12:48:09.007894 2734 log.go:181] (0x4000935c30) (0x400092c000) Create stream\nI0817 12:48:09.007969 2734 log.go:181] (0x4000935c30) (0x400092c000) Stream added, broadcasting: 3\nI0817 
12:48:09.009370 2734 log.go:181] (0x4000935c30) Reply frame received for 3\nI0817 12:48:09.009669 2734 log.go:181] (0x4000935c30) (0x40009100a0) Create stream\nI0817 12:48:09.009753 2734 log.go:181] (0x4000935c30) (0x40009100a0) Stream added, broadcasting: 5\nI0817 12:48:09.010742 2734 log.go:181] (0x4000935c30) Reply frame received for 5\nI0817 12:48:09.073578 2734 log.go:181] (0x4000935c30) Data frame received for 5\nI0817 12:48:09.073930 2734 log.go:181] (0x40009100a0) (5) Data frame handling\nI0817 12:48:09.074776 2734 log.go:181] (0x40009100a0) (5) Data frame sent\nI0817 12:48:09.074875 2734 log.go:181] (0x4000935c30) Data frame received for 3\nI0817 12:48:09.075006 2734 log.go:181] (0x400092c000) (3) Data frame handling\nI0817 12:48:09.075146 2734 log.go:181] (0x400092c000) (3) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0817 12:48:09.075240 2734 log.go:181] (0x4000935c30) Data frame received for 3\nI0817 12:48:09.075316 2734 log.go:181] (0x400092c000) (3) Data frame handling\nI0817 12:48:09.075761 2734 log.go:181] (0x4000935c30) Data frame received for 5\nI0817 12:48:09.075863 2734 log.go:181] (0x40009100a0) (5) Data frame handling\nI0817 12:48:09.076323 2734 log.go:181] (0x4000935c30) Data frame received for 1\nI0817 12:48:09.076425 2734 log.go:181] (0x400092ca00) (1) Data frame handling\nI0817 12:48:09.076537 2734 log.go:181] (0x400092ca00) (1) Data frame sent\nI0817 12:48:09.078140 2734 log.go:181] (0x4000935c30) (0x400092ca00) Stream removed, broadcasting: 1\nI0817 12:48:09.079395 2734 log.go:181] (0x4000935c30) Go away received\nI0817 12:48:09.083208 2734 log.go:181] (0x4000935c30) (0x400092ca00) Stream removed, broadcasting: 1\nI0817 12:48:09.083619 2734 log.go:181] (0x4000935c30) (0x400092c000) Stream removed, broadcasting: 3\nI0817 12:48:09.083810 2734 log.go:181] (0x4000935c30) (0x40009100a0) Stream removed, broadcasting: 5\n" Aug 17 12:48:09.090: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 17 12:48:09.090: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 17 12:48:09.091: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5323 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 17 12:48:11.087: INFO: stderr: "I0817 12:48:10.929105 2754 log.go:181] (0x400003a840) (0x40005b41e0) Create stream\nI0817 12:48:10.931953 2754 log.go:181] (0x400003a840) (0x40005b41e0) Stream added, broadcasting: 1\nI0817 12:48:10.941866 2754 log.go:181] (0x400003a840) Reply frame received for 1\nI0817 12:48:10.942594 2754 log.go:181] (0x400003a840) (0x4000998000) Create stream\nI0817 12:48:10.942663 2754 log.go:181] (0x400003a840) (0x4000998000) Stream added, broadcasting: 3\nI0817 12:48:10.943848 2754 log.go:181] (0x400003a840) Reply frame received for 3\nI0817 12:48:10.944092 2754 log.go:181] (0x400003a840) (0x40005b4280) Create stream\nI0817 12:48:10.944148 2754 log.go:181] (0x400003a840) (0x40005b4280) Stream added, broadcasting: 5\nI0817 12:48:10.945384 2754 log.go:181] (0x400003a840) Reply frame received for 5\nI0817 12:48:11.027812 2754 log.go:181] (0x400003a840) Data frame received for 5\nI0817 12:48:11.028013 2754 log.go:181] (0x40005b4280) (5) Data frame handling\nI0817 12:48:11.028364 2754 log.go:181] (0x40005b4280) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0817 12:48:11.064888 2754 
log.go:181] (0x400003a840) Data frame received for 3\nI0817 12:48:11.065011 2754 log.go:181] (0x4000998000) (3) Data frame handling\nI0817 12:48:11.065122 2754 log.go:181] (0x400003a840) Data frame received for 5\nI0817 12:48:11.065238 2754 log.go:181] (0x40005b4280) (5) Data frame handling\nI0817 12:48:11.065403 2754 log.go:181] (0x4000998000) (3) Data frame sent\nI0817 12:48:11.065474 2754 log.go:181] (0x400003a840) Data frame received for 3\nI0817 12:48:11.065533 2754 log.go:181] (0x4000998000) (3) Data frame handling\nI0817 12:48:11.067121 2754 log.go:181] (0x400003a840) Data frame received for 1\nI0817 12:48:11.067235 2754 log.go:181] (0x40005b41e0) (1) Data frame handling\nI0817 12:48:11.067338 2754 log.go:181] (0x40005b41e0) (1) Data frame sent\nI0817 12:48:11.067958 2754 log.go:181] (0x400003a840) (0x40005b41e0) Stream removed, broadcasting: 1\nI0817 12:48:11.071100 2754 log.go:181] (0x400003a840) Go away received\nI0817 12:48:11.075813 2754 log.go:181] (0x400003a840) (0x40005b41e0) Stream removed, broadcasting: 1\nI0817 12:48:11.076211 2754 log.go:181] (0x400003a840) (0x4000998000) Stream removed, broadcasting: 3\nI0817 12:48:11.076492 2754 log.go:181] (0x400003a840) (0x40005b4280) Stream removed, broadcasting: 5\n" Aug 17 12:48:11.088: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 17 12:48:11.088: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 17 12:48:11.088: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5323 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 17 12:48:12.712: INFO: stderr: "I0817 12:48:12.529343 2774 log.go:181] (0x40004062c0) (0x400022e0a0) Create stream\nI0817 12:48:12.536865 2774 log.go:181] (0x40004062c0) (0x400022e0a0) Stream added, broadcasting: 1\nI0817 12:48:12.558662 2774 log.go:181] (0x40004062c0) Reply frame received for 1\nI0817 12:48:12.559548 2774 log.go:181] (0x40004062c0) (0x400022e000) Create stream\nI0817 12:48:12.559642 2774 log.go:181] (0x40004062c0) (0x400022e000) Stream added, broadcasting: 3\nI0817 12:48:12.561149 2774 log.go:181] (0x40004062c0) Reply frame received for 3\nI0817 12:48:12.561535 2774 log.go:181] (0x40004062c0) (0x400022e140) Create stream\nI0817 12:48:12.561620 2774 log.go:181] (0x40004062c0) (0x400022e140) Stream added, broadcasting: 5\nI0817 12:48:12.563074 2774 log.go:181] (0x40004062c0) Reply frame received for 5\nI0817 12:48:12.621391 2774 log.go:181] (0x40004062c0) Data frame received for 5\nI0817 12:48:12.621673 2774 log.go:181] (0x400022e140) (5) Data frame handling\nI0817 12:48:12.622296 2774 log.go:181] (0x400022e140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0817 12:48:12.692383 2774 log.go:181] (0x40004062c0) Data frame received for 5\nI0817 12:48:12.692589 2774 log.go:181] (0x400022e140) (5) Data frame handling\nI0817 12:48:12.692849 2774 log.go:181] (0x40004062c0) Data frame received for 3\nI0817 12:48:12.692993 2774 log.go:181] (0x400022e000) (3) Data frame handling\nI0817 12:48:12.693097 2774 log.go:181] (0x400022e000) (3) Data frame sent\nI0817 12:48:12.693169 2774 log.go:181] (0x40004062c0) Data frame received for 3\nI0817 12:48:12.693232 2774 log.go:181] (0x400022e000) (3) Data frame handling\nI0817 12:48:12.693491 2774 log.go:181] (0x40004062c0) Data frame received for 1\nI0817 12:48:12.693569 2774 log.go:181] 
(0x400022e0a0) (1) Data frame handling\nI0817 12:48:12.693644 2774 log.go:181] (0x400022e0a0) (1) Data frame sent\nI0817 12:48:12.695108 2774 log.go:181] (0x40004062c0) (0x400022e0a0) Stream removed, broadcasting: 1\nI0817 12:48:12.697703 2774 log.go:181] (0x40004062c0) Go away received\nI0817 12:48:12.701375 2774 log.go:181] (0x40004062c0) (0x400022e0a0) Stream removed, broadcasting: 1\nI0817 12:48:12.701675 2774 log.go:181] (0x40004062c0) (0x400022e000) Stream removed, broadcasting: 3\nI0817 12:48:12.701942 2774 log.go:181] (0x40004062c0) (0x400022e140) Stream removed, broadcasting: 5\n" Aug 17 12:48:12.714: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 17 12:48:12.714: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 17 12:48:12.714: INFO: Waiting for statefulset status.replicas updated to 0 Aug 17 12:48:12.719: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Aug 17 12:48:23.003: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Aug 17 12:48:23.004: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Aug 17 12:48:23.004: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Aug 17 12:48:23.096: INFO: POD NODE PHASE GRACE CONDITIONS Aug 17 12:48:23.096: INFO: ss-0 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 12:47:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 12:48:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 12:48:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 12:47:28 +0000 UTC }] Aug 17 12:48:23.097: INFO: ss-1 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 12:47:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 12:48:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 12:48:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 12:47:51 +0000 UTC }] Aug 17 12:48:23.097: INFO: ss-2 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 12:47:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 12:48:13 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 12:48:13 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 12:47:51 +0000 UTC }] Aug 17 12:48:23.098: INFO: Aug 17 12:48:23.098: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 17 12:48:24.200: INFO: POD NODE PHASE GRACE CONDITIONS Aug 17 12:48:24.201: INFO: ss-0 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 12:47:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 12:48:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 12:48:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 
00:00:00 +0000 UTC 2020-08-17 12:47:28 +0000 UTC }] Aug 17 12:48:24.201: INFO: ss-1 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 12:47:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 12:48:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 12:48:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 12:47:51 +0000 UTC }] Aug 17 12:48:24.202: INFO: ss-2 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 12:47:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 12:48:13 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 12:48:13 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 12:47:51 +0000 UTC }] Aug 17 12:48:24.202: INFO: Aug 17 12:48:24.202: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 17 12:48:25.212: INFO: POD NODE PHASE GRACE CONDITIONS Aug 17 12:48:25.212: INFO: ss-0 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 12:47:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 12:48:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 12:48:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 12:47:28 +0000 UTC }] Aug 17 12:48:25.212: INFO: ss-1 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 12:47:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 12:48:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 12:48:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 12:47:51 +0000 UTC }] Aug 17 12:48:25.212: INFO: ss-2 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 12:47:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 12:48:13 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 12:48:13 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 12:47:51 +0000 UTC }] Aug 17 12:48:25.213: INFO: Aug 17 12:48:25.213: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 17 12:48:26.320: INFO: POD NODE PHASE GRACE CONDITIONS Aug 17 12:48:26.320: INFO: ss-0 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 12:47:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 12:48:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 12:48:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 12:47:28 +0000 UTC }] Aug 17 12:48:26.320: INFO: ss-1 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 12:47:51 +0000 UTC } {Ready False 
0001-01-01 00:00:00 +0000 UTC 2020-08-17 12:48:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 12:48:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 12:47:51 +0000 UTC }] Aug 17 12:48:26.321: INFO: ss-2 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 12:47:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 12:48:13 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 12:48:13 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 12:47:51 +0000 UTC }] Aug 17 12:48:26.321: INFO: Aug 17 12:48:26.321: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 17 12:48:27.329: INFO: POD NODE PHASE GRACE CONDITIONS Aug 17 12:48:27.329: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 12:47:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 12:48:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 12:48:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 12:47:28 +0000 UTC }] Aug 17 12:48:27.330: INFO: ss-1 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 12:47:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 12:48:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 12:48:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 12:47:51 +0000 UTC }] Aug 17 12:48:27.330: INFO: ss-2 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 12:47:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 12:48:13 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 12:48:13 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 12:47:51 +0000 UTC }] Aug 17 12:48:27.330: INFO: Aug 17 12:48:27.330: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 17 12:48:28.341: INFO: POD NODE PHASE GRACE CONDITIONS Aug 17 12:48:28.341: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 12:47:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 12:48:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 12:48:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 12:47:28 +0000 UTC }] Aug 17 12:48:28.341: INFO: ss-1 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 12:47:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 12:48:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 12:48:11 +0000 
UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 12:47:51 +0000 UTC }] Aug 17 12:48:28.341: INFO: ss-2 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 12:47:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 12:48:13 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 12:48:13 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 12:47:51 +0000 UTC }] Aug 17 12:48:28.342: INFO: Aug 17 12:48:28.342: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 17 12:48:29.351: INFO: POD NODE PHASE GRACE CONDITIONS Aug 17 12:48:29.351: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 12:47:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 12:48:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 12:48:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 12:47:28 +0000 UTC }] Aug 17 12:48:29.351: INFO: ss-1 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 12:47:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 12:48:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 12:48:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 12:47:51 +0000 UTC }] Aug 17 12:48:29.352: INFO: ss-2 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 12:47:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 12:48:13 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 12:48:13 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 12:47:51 +0000 UTC }] Aug 17 12:48:29.352: INFO: Aug 17 12:48:29.352: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 17 12:48:30.359: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.693134978s Aug 17 12:48:31.367: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.685692098s Aug 17 12:48:32.373: INFO: Verifying statefulset ss doesn't scale past 0 for another 678.458786ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-5323 Aug 17 12:48:33.379: INFO: Scaling statefulset ss to 0 Aug 17 12:48:33.392: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Aug 17 12:48:33.395: INFO: Deleting all statefulsets in ns statefulset-5323 Aug 17 12:48:33.399: INFO: Scaling statefulset ss to 0 Aug 17 12:48:33.411: INFO: Waiting for statefulset status.replicas updated to 0 Aug 17 12:48:33.416: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:48:33.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5323" for this suite. • [SLOW TEST:64.845 seconds] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":303,"completed":213,"skipped":3627,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSS ------------------------------ [sig-instrumentation] Events API should delete a collection of events [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-instrumentation] Events API /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:48:33.485: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-instrumentation] Events API /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 [It] should delete a collection of events [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create set of events STEP: get a list of Events with a label in the current namespace STEP: delete a list of events Aug 17 12:48:33.648: INFO: requesting DeleteCollection of events STEP: check that the list of events matches the requested quantity [AfterEach] [sig-instrumentation] Events API /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:48:33.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-1580" for this suite. 
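------------------------------
The burst-scaling spec above toggles pod readiness on demand: moving index.html out of the httpd docroot makes the readiness probe fail (Ready=false) without killing the pod, and moving it back restores readiness, which is how the suite proves scaling proceeds even with unhealthy pods. The "Create stream ... Stream added, broadcasting: 1/3/5" chatter in the stderr dumps is the multiplexed SPDY transport that kubectl exec uses. A client-go sketch of the same exec, assuming the kubeconfig path and names from the log:

package main

import (
	"bytes"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/remotecommand"
)

// execInPod runs a shell command in a pod the way the suite's kubectl
// exec invocations above do (mv'ing index.html to fail the readiness probe).
func execInPod(ns, pod, container string, command []string) (string, string, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		return "", "", err
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return "", "", err
	}

	req := cs.CoreV1().RESTClient().Post().
		Resource("pods").Namespace(ns).Name(pod).SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Container: container,
			Command:   command,
			Stdout:    true,
			Stderr:    true,
		}, scheme.ParameterCodec)

	// NewSPDYExecutor opens the multiplexed streams seen in the log
	// above ("Stream added, broadcasting: 1", 3, 5).
	exec, err := remotecommand.NewSPDYExecutor(cfg, "POST", req.URL())
	if err != nil {
		return "", "", err
	}
	var stdout, stderr bytes.Buffer
	err = exec.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr})
	return stdout.String(), stderr.String(), err
}

func main() {
	// Hypothetical call mirroring the log: break ss-0's readiness probe.
	out, _, err := execInPod("statefulset-5323", "ss-0", "webserver",
		[]string{"/bin/sh", "-c", "mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true"})
	if err != nil {
		panic(err)
	}
	fmt.Print(out)
}
------------------------------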
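------------------------------
The Events API spec that follows creates a labeled set of events, removes them all with a single DeleteCollection call, and then checks that a labeled list comes back with the expected (zero) quantity. A sketch against events.k8s.io/v1 via client-go; the label selector here is a stand-in for whatever label the suite actually stamps on its test events.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	ns := "events-1580"
	// Hypothetical label: the suite tags its test events so the
	// collection delete only touches what it created.
	sel := metav1.ListOptions{LabelSelector: "testevent-set=true"}

	// One DeleteCollection call removes every matching event.
	if err := cs.EventsV1().Events(ns).DeleteCollection(context.TODO(),
		metav1.DeleteOptions{}, sel); err != nil {
		panic(err)
	}

	// Check that the list of events matches the requested quantity.
	left, err := cs.EventsV1().Events(ns).List(context.TODO(), sel)
	if err != nil {
		panic(err)
	}
	fmt.Printf("events remaining with label: %d\n", len(left.Items))
}
------------------------------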
•{"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":303,"completed":214,"skipped":3633,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:48:33.849: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should rollback without unnecessary restarts [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 17 12:48:34.067: INFO: Create a RollingUpdate DaemonSet Aug 17 12:48:34.074: INFO: Check that daemon pods launch on every node of the cluster Aug 17 12:48:34.085: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 12:48:34.164: INFO: Number of nodes with available pods: 0 Aug 17 12:48:34.164: INFO: Node latest-worker is running more than one daemon pod Aug 17 12:48:35.177: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 12:48:35.183: INFO: Number of nodes with available pods: 0 Aug 17 12:48:35.183: INFO: Node latest-worker is running more than one daemon pod Aug 17 12:48:36.208: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 12:48:36.286: INFO: Number of nodes with available pods: 0 Aug 17 12:48:36.286: INFO: Node latest-worker is running more than one daemon pod Aug 17 12:48:37.383: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 12:48:37.401: INFO: Number of nodes with available pods: 0 Aug 17 12:48:37.401: INFO: Node latest-worker is running more than one daemon pod Aug 17 12:48:38.176: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 12:48:38.183: INFO: Number of nodes with available pods: 0 Aug 17 12:48:38.183: INFO: Node latest-worker is running more than one daemon pod Aug 17 12:48:39.640: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], 
skip checking this node Aug 17 12:48:39.735: INFO: Number of nodes with available pods: 0 Aug 17 12:48:39.735: INFO: Node latest-worker is running more than one daemon pod Aug 17 12:48:40.371: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 12:48:40.622: INFO: Number of nodes with available pods: 0 Aug 17 12:48:40.622: INFO: Node latest-worker is running more than one daemon pod Aug 17 12:48:41.222: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 12:48:41.317: INFO: Number of nodes with available pods: 0 Aug 17 12:48:41.317: INFO: Node latest-worker is running more than one daemon pod Aug 17 12:48:42.222: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 12:48:42.403: INFO: Number of nodes with available pods: 1 Aug 17 12:48:42.403: INFO: Node latest-worker is running more than one daemon pod Aug 17 12:48:43.232: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 12:48:43.265: INFO: Number of nodes with available pods: 1 Aug 17 12:48:43.266: INFO: Node latest-worker is running more than one daemon pod Aug 17 12:48:44.178: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 12:48:44.183: INFO: Number of nodes with available pods: 2 Aug 17 12:48:44.183: INFO: Number of running nodes: 2, number of available pods: 2 Aug 17 12:48:44.184: INFO: Update the DaemonSet to trigger a rollout Aug 17 12:48:44.197: INFO: Updating DaemonSet daemon-set Aug 17 12:48:50.005: INFO: Roll back the DaemonSet before rollout is complete Aug 17 12:48:50.045: INFO: Updating DaemonSet daemon-set Aug 17 12:48:50.046: INFO: Make sure DaemonSet rollback is complete Aug 17 12:48:50.229: INFO: Wrong image for pod: daemon-set-gbk72. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Aug 17 12:48:50.229: INFO: Pod daemon-set-gbk72 is not available Aug 17 12:48:50.258: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 12:48:51.267: INFO: Wrong image for pod: daemon-set-gbk72. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Aug 17 12:48:51.267: INFO: Pod daemon-set-gbk72 is not available Aug 17 12:48:51.277: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 12:48:52.307: INFO: Wrong image for pod: daemon-set-gbk72. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Aug 17 12:48:52.307: INFO: Pod daemon-set-gbk72 is not available Aug 17 12:48:52.417: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 12:48:53.267: INFO: Wrong image for pod: daemon-set-gbk72. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
Aug 17 12:48:53.267: INFO: Pod daemon-set-gbk72 is not available Aug 17 12:48:53.276: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 12:48:54.336: INFO: Pod daemon-set-xqpcl is not available Aug 17 12:48:54.345: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-439, will wait for the garbage collector to delete the pods Aug 17 12:48:54.419: INFO: Deleting DaemonSet.extensions daemon-set took: 9.470701ms Aug 17 12:48:55.020: INFO: Terminating DaemonSet.extensions daemon-set pods took: 600.855531ms Aug 17 12:48:57.725: INFO: Number of nodes with available pods: 0 Aug 17 12:48:57.725: INFO: Number of running nodes: 0, number of available pods: 0 Aug 17 12:48:57.748: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-439/daemonsets","resourceVersion":"728548"},"items":null} Aug 17 12:48:57.752: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-439/pods","resourceVersion":"728548"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:48:57.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-439" for this suite. 
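------------------------------
The rollback spec above updates daemon-set's pod template to an image that can never pull (foo:non-existent), waits until exactly one replacement pod wedges, then rolls back before the rollout spreads; "without unnecessary restarts" means the pods still running docker.io/library/httpd:2.4.38-alpine are left untouched. A simplified client-go sketch of the update-then-revert follows; note that a real kubectl rollout undo replays the previous ControllerRevision rather than hand-editing the image, so this is a stand-in under that assumption.

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	dsClient := cs.AppsV1().DaemonSets("daemonsets-439")

	// Trigger a RollingUpdate rollout with an image that can never be
	// pulled, as the spec above does.
	ds, err := dsClient.Get(context.TODO(), "daemon-set", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	good := ds.Spec.Template.Spec.Containers[0].Image // httpd:2.4.38-alpine in this run
	ds.Spec.Template.Spec.Containers[0].Image = "foo:non-existent"
	if ds, err = dsClient.Update(context.TODO(), ds, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}

	// "Roll back" by restoring the previous template before the rollout
	// completes; only the already-replaced pod gets recreated, so the
	// untouched pods see no unnecessary restarts.
	ds.Spec.Template.Spec.Containers[0].Image = good
	if _, err = dsClient.Update(context.TODO(), ds, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------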
• [SLOW TEST:23.937 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":303,"completed":215,"skipped":3656,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:48:57.788: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating Pod STEP: Waiting for the pod to be running STEP: Getting the pod STEP: Reading file content from the nginx-container Aug 17 12:49:04.071: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-2706 PodName:pod-sharedvolume-ae7cc9ce-572b-4c61-931a-745fb40e1f60 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 17 12:49:04.072: INFO: >>> kubeConfig: /root/.kube/config I0817 12:49:04.128597 10 log.go:181] (0x40033cc2c0) (0x4006881b80) Create stream I0817 12:49:04.128935 10 log.go:181] (0x40033cc2c0) (0x4006881b80) Stream added, broadcasting: 1 I0817 12:49:04.133840 10 log.go:181] (0x40033cc2c0) Reply frame received for 1 I0817 12:49:04.133989 10 log.go:181] (0x40033cc2c0) (0x4006881c20) Create stream I0817 12:49:04.134065 10 log.go:181] (0x40033cc2c0) (0x4006881c20) Stream added, broadcasting: 3 I0817 12:49:04.135263 10 log.go:181] (0x40033cc2c0) Reply frame received for 3 I0817 12:49:04.135392 10 log.go:181] (0x40033cc2c0) (0x40024bebe0) Create stream I0817 12:49:04.135459 10 log.go:181] (0x40033cc2c0) (0x40024bebe0) Stream added, broadcasting: 5 I0817 12:49:04.136431 10 log.go:181] (0x40033cc2c0) Reply frame received for 5 I0817 12:49:04.203667 10 log.go:181] (0x40033cc2c0) Data frame received for 5 I0817 12:49:04.203809 10 log.go:181] (0x40024bebe0) (5) Data frame handling I0817 12:49:04.203979 10 log.go:181] (0x40033cc2c0) Data frame received for 3 I0817 12:49:04.204114 10 log.go:181] (0x4006881c20) (3) Data frame handling I0817 12:49:04.204250 10 log.go:181] (0x4006881c20) (3) Data frame sent I0817 12:49:04.204354 10 log.go:181] (0x40033cc2c0) Data frame received for 3 I0817 12:49:04.204449 10 log.go:181] (0x4006881c20) (3)
Data frame handling I0817 12:49:04.204675 10 log.go:181] (0x40033cc2c0) Data frame received for 1 I0817 12:49:04.204827 10 log.go:181] (0x4006881b80) (1) Data frame handling I0817 12:49:04.204919 10 log.go:181] (0x4006881b80) (1) Data frame sent I0817 12:49:04.204985 10 log.go:181] (0x40033cc2c0) (0x4006881b80) Stream removed, broadcasting: 1 I0817 12:49:04.205072 10 log.go:181] (0x40033cc2c0) Go away received I0817 12:49:04.205438 10 log.go:181] (0x40033cc2c0) (0x4006881b80) Stream removed, broadcasting: 1 I0817 12:49:04.205559 10 log.go:181] (0x40033cc2c0) (0x4006881c20) Stream removed, broadcasting: 3 I0817 12:49:04.205634 10 log.go:181] (0x40033cc2c0) (0x40024bebe0) Stream removed, broadcasting: 5 Aug 17 12:49:04.205: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:49:04.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2706" for this suite. • [SLOW TEST:6.428 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 pod should support shared volumes between containers [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":303,"completed":216,"skipped":3664,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:49:04.217: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 17 12:49:09.569: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 17 12:49:11.891: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733265349, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733265349, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733265349, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733265348, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 17 12:49:14.050: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733265349, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733265349, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733265349, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733265348, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 17 12:49:17.200: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 17 12:49:17.208: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:49:19.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-341" for this suite. STEP: Destroying namespace "webhook-341-markers" for this suite. 
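The webhook run above follows the standard admissionregistration.k8s.io/v1 flow: serve a TLS endpoint behind a Service, then register a ValidatingWebhookConfiguration whose rules match the test custom resource. A minimal sketch of that registration object, assuming a hypothetical CRD group and handler path (only the Service name e2e-test-webhook and namespace webhook-341 come from this run):

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-custom-resource-operations   # illustrative name
webhooks:
- name: deny-custom-resource.example.com  # hypothetical
  rules:
  - apiGroups: ["mygroup.example.com"]    # hypothetical CRD group
    apiVersions: ["v1"]
    operations: ["CREATE", "UPDATE", "DELETE"]
    resources: ["testcrds"]               # hypothetical plural resource name
  clientConfig:
    service:
      namespace: webhook-341              # from this run
      name: e2e-test-webhook              # from this run
      path: /custom-resource              # assumed handler path
    caBundle: "<base64-encoded CA from the cert setup step>"  # placeholder
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail                     # reject matching requests if the webhook is unreachable

With failurePolicy: Fail, the apiserver denies a matching request whenever the webhook says no (or cannot be reached), which is exactly what the create, update, and delete steps above verify.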
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:15.304 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":303,"completed":217,"skipped":3673,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:49:19.523: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:50:19.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8646" for this suite. 
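The container-probe case that just finished is a negative check: a readiness probe that always fails must leave the pod Ready=False for the full observation window, and, because readiness failures (unlike liveness failures) never kill a container, the restart count must stay at zero. A minimal pod sketch with that behavior, assuming a generic busybox image (the log does not show the actual spec):

apiVersion: v1
kind: Pod
metadata:
  name: probe-never-ready        # illustrative
spec:
  restartPolicy: Always
  containers:
  - name: busybox
    image: busybox               # assumed image
    args: ["/bin/sh", "-c", "sleep 3600"]
    readinessProbe:
      exec:
        command: ["/bin/false"]  # always exits nonzero, so Ready stays False
      initialDelaySeconds: 5
      periodSeconds: 5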
• [SLOW TEST:60.174 seconds] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":303,"completed":218,"skipped":3673,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSS ------------------------------ [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-instrumentation] Events API /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:50:19.698: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-instrumentation] Events API /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a test event STEP: listing events in all namespaces STEP: listing events in test namespace STEP: listing events with field selection filtering on source STEP: listing events with field selection filtering on reportingController STEP: getting the test event STEP: patching the test event STEP: getting the test event STEP: updating the test event STEP: getting the test event STEP: deleting the test event STEP: listing events in all namespaces STEP: listing events in test namespace [AfterEach] [sig-instrumentation] Events API /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:50:19.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-1113" for this suite. 
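The Events API steps above walk the complete verb set of events.k8s.io/v1 (create, list, field-selector list on source and reportingController, get, patch, update, delete, deleteCollection). A sketch of the kind of event object being exercised; every field value here is illustrative except the namespace:

apiVersion: events.k8s.io/v1
kind: Event
metadata:
  name: test-event                        # illustrative
  namespace: events-1113                  # namespace from this run
eventTime: "2020-08-17T12:50:19.000000Z"  # required MicroTime
type: Normal
action: Testing
reason: Testing
note: sample event body
reportingController: example.io/test-controller  # hypothetical; one of the field-selector targets above
reportingInstance: test-instance                  # hypothetical
regarding:
  kind: Pod
  namespace: events-1113
  name: sample-pod                        # hypothetical object the event refers to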
•{"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":303,"completed":219,"skipped":3680,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:50:19.865: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 17 12:50:19.976: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c7f834dd-c231-4ef6-8aee-21e9a0988bbc" in namespace "projected-9912" to be "Succeeded or Failed" Aug 17 12:50:20.002: INFO: Pod "downwardapi-volume-c7f834dd-c231-4ef6-8aee-21e9a0988bbc": Phase="Pending", Reason="", readiness=false. Elapsed: 25.654213ms Aug 17 12:50:22.201: INFO: Pod "downwardapi-volume-c7f834dd-c231-4ef6-8aee-21e9a0988bbc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.224628996s Aug 17 12:50:24.208: INFO: Pod "downwardapi-volume-c7f834dd-c231-4ef6-8aee-21e9a0988bbc": Phase="Running", Reason="", readiness=true. Elapsed: 4.231882614s Aug 17 12:50:26.241: INFO: Pod "downwardapi-volume-c7f834dd-c231-4ef6-8aee-21e9a0988bbc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.265260253s STEP: Saw pod success Aug 17 12:50:26.242: INFO: Pod "downwardapi-volume-c7f834dd-c231-4ef6-8aee-21e9a0988bbc" satisfied condition "Succeeded or Failed" Aug 17 12:50:26.246: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-c7f834dd-c231-4ef6-8aee-21e9a0988bbc container client-container: STEP: delete the pod Aug 17 12:50:26.782: INFO: Waiting for pod downwardapi-volume-c7f834dd-c231-4ef6-8aee-21e9a0988bbc to disappear Aug 17 12:50:26.787: INFO: Pod downwardapi-volume-c7f834dd-c231-4ef6-8aee-21e9a0988bbc no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:50:26.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9912" for this suite. 
• [SLOW TEST:7.252 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":303,"completed":220,"skipped":3691,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:50:27.118: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod Aug 17 12:50:34.606: INFO: Successfully updated pod "annotationupdate6f46f8c7-1e93-4071-9394-dda20f226a43" [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:50:36.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1269" for this suite. 
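The annotation test above works because downward API volume files, unlike downward API environment variables, are refreshed by the kubelet after pod metadata changes: the test patches the pod's annotations ("Successfully updated pod" in the log), then waits for the projected file to reflect the new values. A sketch of the relevant volume, with illustrative names and image:

apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-example   # illustrative
  annotations:
    builder: alice                 # hypothetical annotation to be patched later
spec:
  containers:
  - name: client-container
    image: busybox                 # assumed image
    command: ["/bin/sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations   # re-projected when annotations change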
• [SLOW TEST:9.538 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":303,"completed":221,"skipped":3691,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:50:36.658: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Aug 17 12:50:40.994: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:50:41.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1301" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":303,"completed":222,"skipped":3716,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:50:41.222: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:50:52.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5209" for this suite. • [SLOW TEST:11.530 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance]","total":303,"completed":223,"skipped":3719,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:50:52.753: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-4544 STEP: creating a selector STEP: Creating the service pods in kubernetes Aug 17 12:50:52.898: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Aug 17 12:50:52.989: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Aug 17 12:50:55.110: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Aug 17 12:50:56.997: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 17 12:50:58.997: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 17 12:51:00.996: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 17 12:51:03.135: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 17 12:51:05.009: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 17 12:51:06.997: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 17 12:51:08.997: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 17 12:51:10.996: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 17 12:51:12.996: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 17 12:51:14.996: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 17 12:51:17.076: INFO: The status of Pod netserver-0 is Running (Ready = true) Aug 17 12:51:17.087: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Aug 17 12:51:23.327: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.100 8081 | grep -v '^\s*$'] Namespace:pod-network-test-4544 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 17 12:51:23.327: INFO: >>> kubeConfig: /root/.kube/config I0817 12:51:23.389137 10 log.go:181] (0x40039746e0) (0x4002b975e0) Create stream I0817 12:51:23.389309 10 log.go:181] (0x40039746e0) (0x4002b975e0) Stream added, broadcasting: 1 I0817 12:51:23.393791 10 log.go:181] (0x40039746e0) Reply frame received for 1 I0817 12:51:23.394102 10 log.go:181] (0x40039746e0) (0x4006880000) Create stream I0817 12:51:23.394262 10 log.go:181] (0x40039746e0) 
(0x4006880000) Stream added, broadcasting: 3 I0817 12:51:23.396120 10 log.go:181] (0x40039746e0) Reply frame received for 3 I0817 12:51:23.396264 10 log.go:181] (0x40039746e0) (0x4002b97860) Create stream I0817 12:51:23.396344 10 log.go:181] (0x40039746e0) (0x4002b97860) Stream added, broadcasting: 5 I0817 12:51:23.397841 10 log.go:181] (0x40039746e0) Reply frame received for 5 I0817 12:51:24.491836 10 log.go:181] (0x40039746e0) Data frame received for 5 I0817 12:51:24.492008 10 log.go:181] (0x4002b97860) (5) Data frame handling I0817 12:51:24.492114 10 log.go:181] (0x40039746e0) Data frame received for 3 I0817 12:51:24.492207 10 log.go:181] (0x4006880000) (3) Data frame handling I0817 12:51:24.492301 10 log.go:181] (0x4006880000) (3) Data frame sent I0817 12:51:24.492384 10 log.go:181] (0x40039746e0) Data frame received for 3 I0817 12:51:24.492469 10 log.go:181] (0x4006880000) (3) Data frame handling I0817 12:51:24.493975 10 log.go:181] (0x40039746e0) Data frame received for 1 I0817 12:51:24.494114 10 log.go:181] (0x4002b975e0) (1) Data frame handling I0817 12:51:24.494240 10 log.go:181] (0x4002b975e0) (1) Data frame sent I0817 12:51:24.494354 10 log.go:181] (0x40039746e0) (0x4002b975e0) Stream removed, broadcasting: 1 I0817 12:51:24.494485 10 log.go:181] (0x40039746e0) Go away received I0817 12:51:24.495119 10 log.go:181] (0x40039746e0) (0x4002b975e0) Stream removed, broadcasting: 1 I0817 12:51:24.495253 10 log.go:181] (0x40039746e0) (0x4006880000) Stream removed, broadcasting: 3 I0817 12:51:24.495391 10 log.go:181] (0x40039746e0) (0x4002b97860) Stream removed, broadcasting: 5 Aug 17 12:51:24.495: INFO: Found all expected endpoints: [netserver-0] Aug 17 12:51:24.548: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.60 8081 | grep -v '^\s*$'] Namespace:pod-network-test-4544 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 17 12:51:24.548: INFO: >>> kubeConfig: /root/.kube/config I0817 12:51:24.605170 10 log.go:181] (0x4000e17b80) (0x4001792b40) Create stream I0817 12:51:24.605376 10 log.go:181] (0x4000e17b80) (0x4001792b40) Stream added, broadcasting: 1 I0817 12:51:24.613860 10 log.go:181] (0x4000e17b80) Reply frame received for 1 I0817 12:51:24.614093 10 log.go:181] (0x4000e17b80) (0x4004645c20) Create stream I0817 12:51:24.614219 10 log.go:181] (0x4000e17b80) (0x4004645c20) Stream added, broadcasting: 3 I0817 12:51:24.615883 10 log.go:181] (0x4000e17b80) Reply frame received for 3 I0817 12:51:24.616056 10 log.go:181] (0x4000e17b80) (0x4002b97900) Create stream I0817 12:51:24.616133 10 log.go:181] (0x4000e17b80) (0x4002b97900) Stream added, broadcasting: 5 I0817 12:51:24.617489 10 log.go:181] (0x4000e17b80) Reply frame received for 5 I0817 12:51:25.706182 10 log.go:181] (0x4000e17b80) Data frame received for 3 I0817 12:51:25.706334 10 log.go:181] (0x4004645c20) (3) Data frame handling I0817 12:51:25.706436 10 log.go:181] (0x4004645c20) (3) Data frame sent I0817 12:51:25.706508 10 log.go:181] (0x4000e17b80) Data frame received for 3 I0817 12:51:25.706569 10 log.go:181] (0x4004645c20) (3) Data frame handling I0817 12:51:25.706649 10 log.go:181] (0x4000e17b80) Data frame received for 5 I0817 12:51:25.706713 10 log.go:181] (0x4002b97900) (5) Data frame handling I0817 12:51:25.707718 10 log.go:181] (0x4000e17b80) Data frame received for 1 I0817 12:51:25.707797 10 log.go:181] (0x4001792b40) (1) Data frame handling I0817 12:51:25.707869 10 log.go:181] (0x4001792b40) (1) Data frame sent 
I0817 12:51:25.707948 10 log.go:181] (0x4000e17b80) (0x4001792b40) Stream removed, broadcasting: 1 I0817 12:51:25.708076 10 log.go:181] (0x4000e17b80) Go away received I0817 12:51:25.708490 10 log.go:181] (0x4000e17b80) (0x4001792b40) Stream removed, broadcasting: 1 I0817 12:51:25.708632 10 log.go:181] (0x4000e17b80) (0x4004645c20) Stream removed, broadcasting: 3 I0817 12:51:25.708912 10 log.go:181] (0x4000e17b80) (0x4002b97900) Stream removed, broadcasting: 5 Aug 17 12:51:25.709: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:51:25.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-4544" for this suite. • [SLOW TEST:32.972 seconds] [sig-network] Networking /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":224,"skipped":3721,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSS ------------------------------ [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:51:25.727: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename certificates STEP: Waiting for a default service account to be provisioned in namespace [It] should support CSR API operations [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting /apis STEP: getting /apis/certificates.k8s.io STEP: getting /apis/certificates.k8s.io/v1 STEP: creating STEP: getting STEP: listing STEP: watching Aug 17 12:51:32.471: INFO: starting watch STEP: patching STEP: updating Aug 17 12:51:32.705: INFO: waiting for watch events with expected annotations Aug 17 12:51:32.706: INFO: saw patched and updated annotations STEP: getting /approval STEP: patching /approval STEP: updating /approval STEP: getting /status STEP: patching /status STEP: updating /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] 
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:51:34.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "certificates-346" for this suite. • [SLOW TEST:8.745 seconds] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should support CSR API operations [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":303,"completed":225,"skipped":3727,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:51:34.475: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override command Aug 17 12:51:35.089: INFO: Waiting up to 5m0s for pod "client-containers-359c090b-0ccf-4068-902e-8cf479b738f0" in namespace "containers-4513" to be "Succeeded or Failed" Aug 17 12:51:35.332: INFO: Pod "client-containers-359c090b-0ccf-4068-902e-8cf479b738f0": Phase="Pending", Reason="", readiness=false. Elapsed: 242.938199ms Aug 17 12:51:37.340: INFO: Pod "client-containers-359c090b-0ccf-4068-902e-8cf479b738f0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.250425678s Aug 17 12:51:39.362: INFO: Pod "client-containers-359c090b-0ccf-4068-902e-8cf479b738f0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.272588378s Aug 17 12:51:41.368: INFO: Pod "client-containers-359c090b-0ccf-4068-902e-8cf479b738f0": Phase="Running", Reason="", readiness=true. Elapsed: 6.278087168s Aug 17 12:51:43.374: INFO: Pod "client-containers-359c090b-0ccf-4068-902e-8cf479b738f0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.284212646s STEP: Saw pod success Aug 17 12:51:43.374: INFO: Pod "client-containers-359c090b-0ccf-4068-902e-8cf479b738f0" satisfied condition "Succeeded or Failed" Aug 17 12:51:43.378: INFO: Trying to get logs from node latest-worker2 pod client-containers-359c090b-0ccf-4068-902e-8cf479b738f0 container test-container: STEP: delete the pod Aug 17 12:51:43.417: INFO: Waiting for pod client-containers-359c090b-0ccf-4068-902e-8cf479b738f0 to disappear Aug 17 12:51:43.432: INFO: Pod client-containers-359c090b-0ccf-4068-902e-8cf479b738f0 no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:51:43.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-4513" for this suite. • [SLOW TEST:8.975 seconds] [k8s.io] Docker Containers /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":303,"completed":226,"skipped":3753,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:51:43.454: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] deployment should delete old replica sets [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 17 12:51:44.218: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Aug 17 12:51:49.244: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Aug 17 12:51:51.276: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 Aug 17 12:51:55.689: INFO: Deployment "test-cleanup-deployment": 
&Deployment{ObjectMeta:{test-cleanup-deployment deployment-2226 /apis/apps/v1/namespaces/deployment-2226/deployments/test-cleanup-deployment f2bd803c-5fb4-4939-a1b1-6ea386517bc0 729468 1 2020-08-17 12:51:51 +0000 UTC map[name:cleanup-pod] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 2020-08-17 12:51:51 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-08-17 12:51:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0x4004929a38 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-08-17 12:51:51 +0000 UTC,LastTransitionTime:2020-08-17 12:51:51 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-cleanup-deployment-5d446bdd47" has successfully progressed.,LastUpdateTime:2020-08-17 12:51:55 +0000 UTC,LastTransitionTime:2020-08-17 12:51:51 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Aug 17 12:51:55.695: INFO: New ReplicaSet "test-cleanup-deployment-5d446bdd47" of Deployment 
"test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-5d446bdd47 deployment-2226 /apis/apps/v1/namespaces/deployment-2226/replicasets/test-cleanup-deployment-5d446bdd47 194c898e-b19a-46dd-872a-5433edce48a4 729457 1 2020-08-17 12:51:51 +0000 UTC map[name:cleanup-pod pod-template-hash:5d446bdd47] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment f2bd803c-5fb4-4939-a1b1-6ea386517bc0 0x4004929e57 0x4004929e58}] [] [{kube-controller-manager Update apps/v1 2020-08-17 12:51:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f2bd803c-5fb4-4939-a1b1-6ea386517bc0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 5d446bdd47,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:5d446bdd47] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0x4004929ee8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Aug 17 12:51:55.701: INFO: Pod "test-cleanup-deployment-5d446bdd47-6kf6j" is available: &Pod{ObjectMeta:{test-cleanup-deployment-5d446bdd47-6kf6j test-cleanup-deployment-5d446bdd47- deployment-2226 /api/v1/namespaces/deployment-2226/pods/test-cleanup-deployment-5d446bdd47-6kf6j 749cdedc-de2e-4ebd-9630-79d1ac39fbd7 729456 0 2020-08-17 12:51:51 +0000 UTC map[name:cleanup-pod pod-template-hash:5d446bdd47] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-5d446bdd47 194c898e-b19a-46dd-872a-5433edce48a4 0x40032828a7 0x40032828a8}] [] [{kube-controller-manager Update v1 2020-08-17 12:51:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"194c898e-b19a-46dd-872a-5433edce48a4\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-17 12:51:55 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.63\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-66bz5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-66bz5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-66bz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Tolera
tion{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 12:51:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 12:51:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 12:51:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 12:51:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.1.63,StartTime:2020-08-17 12:51:51 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-17 12:51:54 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0,ContainerID:containerd://9b7818df1be7b858fcdfe737d15de7abb2a81a4bf18d93f1a45b6032cb8dd8d1,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.63,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:51:55.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2226" for this suite. 
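The Deployment dump above explains why the old ReplicaSet disappears: the spec carries RevisionHistoryLimit:*0, so as soon as the new ReplicaSet reports NewReplicaSetAvailable the controller garbage-collects every superseded revision. Reconstructed as a manifest from the fields visible in the dump (only the relevant fields kept):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-cleanup-deployment
  namespace: deployment-2226
spec:
  replicas: 1
  revisionHistoryLimit: 0        # delete old ReplicaSets immediately after rollout
  selector:
    matchLabels:
      name: cleanup-pod
  template:
    metadata:
      labels:
        name: cleanup-pod
    spec:
      containers:
      - name: agnhost
        image: k8s.gcr.io/e2e-test-images/agnhost:2.20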
• [SLOW TEST:12.259 seconds] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":303,"completed":227,"skipped":3812,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:51:55.717: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-secret-2pvg STEP: Creating a pod to test atomic-volume-subpath Aug 17 12:51:55.854: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-2pvg" in namespace "subpath-8982" to be "Succeeded or Failed" Aug 17 12:51:55.872: INFO: Pod "pod-subpath-test-secret-2pvg": Phase="Pending", Reason="", readiness=false. Elapsed: 18.441505ms Aug 17 12:51:57.880: INFO: Pod "pod-subpath-test-secret-2pvg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02615403s Aug 17 12:51:59.888: INFO: Pod "pod-subpath-test-secret-2pvg": Phase="Running", Reason="", readiness=true. Elapsed: 4.034397188s Aug 17 12:52:01.926: INFO: Pod "pod-subpath-test-secret-2pvg": Phase="Running", Reason="", readiness=true. Elapsed: 6.071585459s Aug 17 12:52:03.934: INFO: Pod "pod-subpath-test-secret-2pvg": Phase="Running", Reason="", readiness=true. Elapsed: 8.080463962s Aug 17 12:52:06.075: INFO: Pod "pod-subpath-test-secret-2pvg": Phase="Running", Reason="", readiness=true. Elapsed: 10.221197303s Aug 17 12:52:08.084: INFO: Pod "pod-subpath-test-secret-2pvg": Phase="Running", Reason="", readiness=true. Elapsed: 12.230137756s Aug 17 12:52:10.092: INFO: Pod "pod-subpath-test-secret-2pvg": Phase="Running", Reason="", readiness=true. Elapsed: 14.237580868s Aug 17 12:52:12.098: INFO: Pod "pod-subpath-test-secret-2pvg": Phase="Running", Reason="", readiness=true. Elapsed: 16.244313435s Aug 17 12:52:14.185: INFO: Pod "pod-subpath-test-secret-2pvg": Phase="Running", Reason="", readiness=true. 
Elapsed: 18.330548177s Aug 17 12:52:16.191: INFO: Pod "pod-subpath-test-secret-2pvg": Phase="Running", Reason="", readiness=true. Elapsed: 20.336862993s Aug 17 12:52:18.199: INFO: Pod "pod-subpath-test-secret-2pvg": Phase="Running", Reason="", readiness=true. Elapsed: 22.344670384s Aug 17 12:52:20.207: INFO: Pod "pod-subpath-test-secret-2pvg": Phase="Running", Reason="", readiness=true. Elapsed: 24.352576659s Aug 17 12:52:22.214: INFO: Pod "pod-subpath-test-secret-2pvg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.359494713s STEP: Saw pod success Aug 17 12:52:22.214: INFO: Pod "pod-subpath-test-secret-2pvg" satisfied condition "Succeeded or Failed" Aug 17 12:52:22.218: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-secret-2pvg container test-container-subpath-secret-2pvg: STEP: delete the pod Aug 17 12:52:22.265: INFO: Waiting for pod pod-subpath-test-secret-2pvg to disappear Aug 17 12:52:22.289: INFO: Pod pod-subpath-test-secret-2pvg no longer exists STEP: Deleting pod pod-subpath-test-secret-2pvg Aug 17 12:52:22.289: INFO: Deleting pod "pod-subpath-test-secret-2pvg" in namespace "subpath-8982" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:52:22.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8982" for this suite. • [SLOW TEST:26.655 seconds] [sig-storage] Subpath /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":303,"completed":228,"skipped":3849,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:52:22.374: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. 
[Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:52:22.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1655" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":303,"completed":229,"skipped":3861,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} S ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:52:22.598: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:52:38.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6442" for this suite. • [SLOW TEST:16.387 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance]","total":303,"completed":230,"skipped":3862,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:52:38.985: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service externalname-service with the type=ExternalName in namespace services-4381 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-4381 I0817 12:52:40.101602 10 runners.go:190] Created replication controller with name: externalname-service, namespace: services-4381, replica count: 2 I0817 12:52:43.152881 10 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0817 12:52:46.153712 10 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 17 12:52:46.154: INFO: Creating new exec pod Aug 17 12:52:51.195: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-4381 execpodgwx9v -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Aug 17 12:52:52.800: INFO: stderr: "I0817 12:52:52.679876 2794 log.go:181] (0x4000250000) (0x40006a8000) Create stream\nI0817 12:52:52.682654 2794 log.go:181] (0x4000250000) (0x40006a8000) Stream added, broadcasting: 1\nI0817 12:52:52.695078 2794 log.go:181] (0x4000250000) Reply frame received for 1\nI0817 12:52:52.696432 2794 log.go:181] (0x4000250000) (0x40006a80a0) Create stream\nI0817 12:52:52.696583 2794 log.go:181] (0x4000250000) (0x40006a80a0) Stream added, broadcasting: 3\nI0817 12:52:52.698626 2794 log.go:181] (0x4000250000) Reply frame received for 3\nI0817 12:52:52.699131 2794 log.go:181] (0x4000250000) (0x4000e8e000) Create stream\nI0817 12:52:52.699268 2794 log.go:181] (0x4000250000) (0x4000e8e000) Stream added, broadcasting: 5\nI0817 12:52:52.700981 2794 log.go:181] (0x4000250000) Reply frame received for 5\nI0817 12:52:52.776083 2794 log.go:181] (0x4000250000) Data frame received for 5\nI0817 12:52:52.776637 2794 log.go:181] (0x4000250000) Data frame received for 3\nI0817 12:52:52.776851 2794 log.go:181] (0x40006a80a0) (3) Data frame handling\nI0817 12:52:52.776974 2794 log.go:181] (0x4000250000) 
Data frame received for 1\nI0817 12:52:52.777123 2794 log.go:181] (0x40006a8000) (1) Data frame handling\nI0817 12:52:52.777244 2794 log.go:181] (0x4000e8e000) (5) Data frame handling\nI0817 12:52:52.779573 2794 log.go:181] (0x40006a8000) (1) Data frame sent\nI0817 12:52:52.779896 2794 log.go:181] (0x4000e8e000) (5) Data frame sent\nI0817 12:52:52.780067 2794 log.go:181] (0x4000250000) Data frame received for 5\n+ nc -zv -t -w 2 externalname-service 80\nI0817 12:52:52.780187 2794 log.go:181] (0x4000e8e000) (5) Data frame handling\nI0817 12:52:52.781806 2794 log.go:181] (0x4000e8e000) (5) Data frame sent\nI0817 12:52:52.781898 2794 log.go:181] (0x4000250000) Data frame received for 5\nI0817 12:52:52.781962 2794 log.go:181] (0x4000e8e000) (5) Data frame handling\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0817 12:52:52.783485 2794 log.go:181] (0x4000250000) (0x40006a8000) Stream removed, broadcasting: 1\nI0817 12:52:52.785104 2794 log.go:181] (0x4000250000) Go away received\nI0817 12:52:52.789190 2794 log.go:181] (0x4000250000) (0x40006a8000) Stream removed, broadcasting: 1\nI0817 12:52:52.789780 2794 log.go:181] (0x4000250000) (0x40006a80a0) Stream removed, broadcasting: 3\nI0817 12:52:52.790085 2794 log.go:181] (0x4000250000) (0x4000e8e000) Stream removed, broadcasting: 5\n" Aug 17 12:52:52.802: INFO: stdout: "" Aug 17 12:52:52.807: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-4381 execpodgwx9v -- /bin/sh -x -c nc -zv -t -w 2 10.101.173.81 80' Aug 17 12:52:54.437: INFO: stderr: "I0817 12:52:54.302420 2814 log.go:181] (0x4000a2a000) (0x400069bf40) Create stream\nI0817 12:52:54.304773 2814 log.go:181] (0x4000a2a000) (0x400069bf40) Stream added, broadcasting: 1\nI0817 12:52:54.316587 2814 log.go:181] (0x4000a2a000) Reply frame received for 1\nI0817 12:52:54.317252 2814 log.go:181] (0x4000a2a000) (0x400017c3c0) Create stream\nI0817 12:52:54.317323 2814 log.go:181] (0x4000a2a000) (0x400017c3c0) Stream added, broadcasting: 3\nI0817 12:52:54.318831 2814 log.go:181] (0x4000a2a000) Reply frame received for 3\nI0817 12:52:54.319234 2814 log.go:181] (0x4000a2a000) (0x4000b003c0) Create stream\nI0817 12:52:54.319331 2814 log.go:181] (0x4000a2a000) (0x4000b003c0) Stream added, broadcasting: 5\nI0817 12:52:54.320710 2814 log.go:181] (0x4000a2a000) Reply frame received for 5\nI0817 12:52:54.415533 2814 log.go:181] (0x4000a2a000) Data frame received for 3\nI0817 12:52:54.415855 2814 log.go:181] (0x4000a2a000) Data frame received for 1\nI0817 12:52:54.416310 2814 log.go:181] (0x4000a2a000) Data frame received for 5\nI0817 12:52:54.416457 2814 log.go:181] (0x4000b003c0) (5) Data frame handling\nI0817 12:52:54.416577 2814 log.go:181] (0x400069bf40) (1) Data frame handling\nI0817 12:52:54.416843 2814 log.go:181] (0x400017c3c0) (3) Data frame handling\n+ nc -zv -t -w 2 10.101.173.81 80\nConnection to 10.101.173.81 80 port [tcp/http] succeeded!\nI0817 12:52:54.419722 2814 log.go:181] (0x4000b003c0) (5) Data frame sent\nI0817 12:52:54.420380 2814 log.go:181] (0x4000a2a000) Data frame received for 5\nI0817 12:52:54.420510 2814 log.go:181] (0x4000b003c0) (5) Data frame handling\nI0817 12:52:54.421098 2814 log.go:181] (0x400069bf40) (1) Data frame sent\nI0817 12:52:54.422125 2814 log.go:181] (0x4000a2a000) (0x400069bf40) Stream removed, broadcasting: 1\nI0817 12:52:54.422783 2814 log.go:181] (0x4000a2a000) Go away received\nI0817 12:52:54.426014 2814 log.go:181] (0x4000a2a000) (0x400069bf40) Stream 
removed, broadcasting: 1\nI0817 12:52:54.426423 2814 log.go:181] (0x4000a2a000) (0x400017c3c0) Stream removed, broadcasting: 3\nI0817 12:52:54.426740 2814 log.go:181] (0x4000a2a000) (0x4000b003c0) Stream removed, broadcasting: 5\n" Aug 17 12:52:54.438: INFO: stdout: "" Aug 17 12:52:54.439: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:52:54.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4381" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:15.578 seconds] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":303,"completed":231,"skipped":3862,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:52:54.564: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop simple daemon [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Aug 17 12:52:54.718: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 12:52:54.723: INFO: Number of nodes with available pods: 0 Aug 17 12:52:54.723: INFO: Node latest-worker is running more than one daemon pod Aug 17 12:52:55.829: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 12:52:55.861: INFO: Number of nodes with available pods: 0 Aug 17 12:52:55.861: INFO: Node latest-worker is running more than one daemon pod Aug 17 12:52:57.053: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 12:52:57.061: INFO: Number of nodes with available pods: 0 Aug 17 12:52:57.061: INFO: Node latest-worker is running more than one daemon pod Aug 17 12:52:57.877: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 12:52:57.884: INFO: Number of nodes with available pods: 0 Aug 17 12:52:57.885: INFO: Node latest-worker is running more than one daemon pod Aug 17 12:52:58.732: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 12:52:58.739: INFO: Number of nodes with available pods: 0 Aug 17 12:52:58.739: INFO: Node latest-worker is running more than one daemon pod Aug 17 12:52:59.749: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 12:52:59.765: INFO: Number of nodes with available pods: 1 Aug 17 12:52:59.765: INFO: Node latest-worker is running more than one daemon pod Aug 17 12:53:00.782: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 12:53:00.869: INFO: Number of nodes with available pods: 2 Aug 17 12:53:00.869: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
Aug 17 12:53:01.015: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 12:53:01.054: INFO: Number of nodes with available pods: 1 Aug 17 12:53:01.054: INFO: Node latest-worker is running more than one daemon pod Aug 17 12:53:02.279: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 12:53:02.539: INFO: Number of nodes with available pods: 1 Aug 17 12:53:02.539: INFO: Node latest-worker is running more than one daemon pod Aug 17 12:53:03.275: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 12:53:03.510: INFO: Number of nodes with available pods: 1 Aug 17 12:53:03.510: INFO: Node latest-worker is running more than one daemon pod Aug 17 12:53:04.123: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 12:53:04.131: INFO: Number of nodes with available pods: 1 Aug 17 12:53:04.131: INFO: Node latest-worker is running more than one daemon pod Aug 17 12:53:05.074: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 12:53:05.109: INFO: Number of nodes with available pods: 1 Aug 17 12:53:05.109: INFO: Node latest-worker is running more than one daemon pod Aug 17 12:53:06.068: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 12:53:06.075: INFO: Number of nodes with available pods: 1 Aug 17 12:53:06.075: INFO: Node latest-worker is running more than one daemon pod Aug 17 12:53:07.082: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 12:53:07.089: INFO: Number of nodes with available pods: 1 Aug 17 12:53:07.089: INFO: Node latest-worker is running more than one daemon pod Aug 17 12:53:08.067: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 12:53:08.074: INFO: Number of nodes with available pods: 1 Aug 17 12:53:08.074: INFO: Node latest-worker is running more than one daemon pod Aug 17 12:53:09.066: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 12:53:09.073: INFO: Number of nodes with available pods: 1 Aug 17 12:53:09.073: INFO: Node latest-worker is running more than one daemon pod Aug 17 12:53:10.118: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 12:53:10.160: INFO: Number of nodes with available pods: 1 Aug 17 12:53:10.160: INFO: Node latest-worker is running more than one daemon pod Aug 17 12:53:11.189: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints 
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 12:53:11.246: INFO: Number of nodes with available pods: 1 Aug 17 12:53:11.246: INFO: Node latest-worker is running more than one daemon pod Aug 17 12:53:12.067: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 12:53:12.072: INFO: Number of nodes with available pods: 1 Aug 17 12:53:12.072: INFO: Node latest-worker is running more than one daemon pod Aug 17 12:53:13.067: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 12:53:13.074: INFO: Number of nodes with available pods: 1 Aug 17 12:53:13.074: INFO: Node latest-worker is running more than one daemon pod Aug 17 12:53:14.094: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 12:53:14.101: INFO: Number of nodes with available pods: 1 Aug 17 12:53:14.101: INFO: Node latest-worker is running more than one daemon pod Aug 17 12:53:15.068: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 12:53:15.073: INFO: Number of nodes with available pods: 1 Aug 17 12:53:15.073: INFO: Node latest-worker is running more than one daemon pod Aug 17 12:53:16.550: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 17 12:53:16.558: INFO: Number of nodes with available pods: 2 Aug 17 12:53:16.558: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9586, will wait for the garbage collector to delete the pods Aug 17 12:53:16.626: INFO: Deleting DaemonSet.extensions daemon-set took: 7.670835ms Aug 17 12:53:17.227: INFO: Terminating DaemonSet.extensions daemon-set pods took: 601.434331ms Aug 17 12:53:30.533: INFO: Number of nodes with available pods: 0 Aug 17 12:53:30.533: INFO: Number of running nodes: 0, number of available pods: 0 Aug 17 12:53:30.555: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9586/daemonsets","resourceVersion":"729952"},"items":null} Aug 17 12:53:30.591: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9586/pods","resourceVersion":"729953"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:53:30.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-9586" for this suite. 
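
A minimal client-go sketch of the lifecycle exercised above: create a simple DaemonSet, then delete it in the foreground so the garbage collector removes the daemon pods, as the teardown step in the log does. The name, namespace, labels, and image mirror the log; the kubeconfig path is an assumption, and this is not the e2e framework's own code.

package main

import (
	"context"
	"log"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	labels := map[string]string{"daemonset-name": "daemon-set"}
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				// Without a toleration for node-role.kubernetes.io/master,
				// daemon pods land only on the worker nodes, which is why the
				// log above skips the control-plane node.
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name:  "app",
					Image: "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20",
				}}},
			},
		},
	}
	ctx := context.TODO()
	if _, err := cs.AppsV1().DaemonSets("daemonsets-9586").Create(ctx, ds, metav1.CreateOptions{}); err != nil {
		log.Fatal(err)
	}
	// Foreground propagation waits for the daemon pods to be garbage
	// collected before the DaemonSet itself disappears.
	fg := metav1.DeletePropagationForeground
	if err := cs.AppsV1().DaemonSets("daemonsets-9586").Delete(ctx, "daemon-set",
		metav1.DeleteOptions{PropagationPolicy: &fg}); err != nil {
		log.Fatal(err)
	}
}

Foreground propagation mirrors the log's "will wait for the garbage collector to delete the pods" step; background deletion would return immediately instead.
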
• [SLOW TEST:36.055 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":303,"completed":232,"skipped":3862,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:53:30.621: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Aug 17 12:53:37.047: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:53:37.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3961" for this suite. 
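
The behavior above comes down to two container fields. A sketch of the pod shape under test follows; the image, UID, and message path are illustrative assumptions, not the spec's actual values.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid := int64(1000) // any non-root UID
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "busybox", // illustrative image
				Command: []string{"/bin/sh", "-c", "echo -n DONE > /dev/termination-custom-log"},
				// Non-default path: the kubelet reads the termination message
				// from this file instead of /dev/termination-log.
				TerminationMessagePath: "/dev/termination-custom-log",
				SecurityContext:        &corev1.SecurityContext{RunAsUser: &uid},
			}},
		},
	}
	// Once the pod succeeds, the message ("DONE", as in the log above)
	// surfaces in pod.Status.ContainerStatuses[0].State.Terminated.Message.
	fmt.Println(pod.Spec.Containers[0].TerminationMessagePath)
}
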
• [SLOW TEST:6.715 seconds] [k8s.io] Container Runtime /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 on terminated container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:134 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":303,"completed":233,"skipped":3869,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicaSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:53:37.338: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 17 12:53:38.155: INFO: Creating ReplicaSet my-hostname-basic-5b135d86-1815-4a26-b926-ef7269c6e1fd Aug 17 12:53:38.232: INFO: Pod name my-hostname-basic-5b135d86-1815-4a26-b926-ef7269c6e1fd: Found 0 pods out of 1 Aug 17 12:53:43.238: INFO: Pod name my-hostname-basic-5b135d86-1815-4a26-b926-ef7269c6e1fd: Found 1 pods out of 1 Aug 17 12:53:43.239: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-5b135d86-1815-4a26-b926-ef7269c6e1fd" is running Aug 17 12:53:43.244: INFO: Pod "my-hostname-basic-5b135d86-1815-4a26-b926-ef7269c6e1fd-h96qj" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-17 12:53:38 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-17 12:53:42 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-17 12:53:42 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-17 12:53:38 +0000 UTC Reason: Message:}]) Aug 17 12:53:43.247: INFO: Trying to dial the pod Aug 17 12:53:48.367: INFO: Controller 
my-hostname-basic-5b135d86-1815-4a26-b926-ef7269c6e1fd: Got expected result from replica 1 [my-hostname-basic-5b135d86-1815-4a26-b926-ef7269c6e1fd-h96qj]: "my-hostname-basic-5b135d86-1815-4a26-b926-ef7269c6e1fd-h96qj", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:53:48.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-2637" for this suite. • [SLOW TEST:11.048 seconds] [sig-apps] ReplicaSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":303,"completed":234,"skipped":3869,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:53:48.388: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 17 12:53:48.756: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e5083fb6-700b-499d-ad17-1bdb87345530" in namespace "projected-5527" to be "Succeeded or Failed" Aug 17 12:53:48.953: INFO: Pod "downwardapi-volume-e5083fb6-700b-499d-ad17-1bdb87345530": Phase="Pending", Reason="", readiness=false. Elapsed: 196.383904ms Aug 17 12:53:50.959: INFO: Pod "downwardapi-volume-e5083fb6-700b-499d-ad17-1bdb87345530": Phase="Pending", Reason="", readiness=false. Elapsed: 2.202258611s Aug 17 12:53:53.643: INFO: Pod "downwardapi-volume-e5083fb6-700b-499d-ad17-1bdb87345530": Phase="Pending", Reason="", readiness=false. Elapsed: 4.886719753s Aug 17 12:53:55.796: INFO: Pod "downwardapi-volume-e5083fb6-700b-499d-ad17-1bdb87345530": Phase="Pending", Reason="", readiness=false. 
Elapsed: 7.039025154s Aug 17 12:53:57.937: INFO: Pod "downwardapi-volume-e5083fb6-700b-499d-ad17-1bdb87345530": Phase="Pending", Reason="", readiness=false. Elapsed: 9.180303789s Aug 17 12:53:59.945: INFO: Pod "downwardapi-volume-e5083fb6-700b-499d-ad17-1bdb87345530": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.188323283s STEP: Saw pod success Aug 17 12:53:59.945: INFO: Pod "downwardapi-volume-e5083fb6-700b-499d-ad17-1bdb87345530" satisfied condition "Succeeded or Failed" Aug 17 12:54:00.210: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-e5083fb6-700b-499d-ad17-1bdb87345530 container client-container: STEP: delete the pod Aug 17 12:54:00.494: INFO: Waiting for pod downwardapi-volume-e5083fb6-700b-499d-ad17-1bdb87345530 to disappear Aug 17 12:54:00.790: INFO: Pod downwardapi-volume-e5083fb6-700b-499d-ad17-1bdb87345530 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:54:00.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5527" for this suite. • [SLOW TEST:12.448 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":303,"completed":235,"skipped":3894,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should delete a collection of pods [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:54:00.839: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should delete a collection of pods [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create set of pods Aug 17 12:54:01.721: INFO: created test-pod-1 Aug 17 12:54:01.768: INFO: created test-pod-2 Aug 17 12:54:01.880: INFO: created test-pod-3 STEP: waiting for all 3 pods to be located STEP: waiting for all pods to be deleted [AfterEach] [k8s.io] Pods 
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:54:04.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8989" for this suite. •{"msg":"PASSED [k8s.io] Pods should delete a collection of pods [Conformance]","total":303,"completed":236,"skipped":3925,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] Pods Extended /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:54:04.293: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:161 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:54:04.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7383" for this suite. 
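
The QOS-class assertion above follows from the resource stanza alone: equal requests and limits for both cpu and memory put a pod in the Guaranteed class. A minimal sketch, with an arbitrary pause image assumed:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Matching requests and limits yield the Guaranteed QoS class; the
	// apiserver reports it in status.qosClass after creation.
	rl := corev1.ResourceList{
		corev1.ResourceCPU:    resource.MustParse("100m"),
		corev1.ResourceMemory: resource.MustParse("100Mi"),
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "qos-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:      "main",
				Image:     "k8s.gcr.io/pause:3.2", // illustrative image
				Resources: corev1.ResourceRequirements{Requests: rl, Limits: rl},
			}},
		},
	}
	fmt.Println(pod.Spec.Containers[0].Resources.Limits.Cpu())
	// After creation, expect pod.Status.QOSClass == corev1.PodQOSGuaranteed,
	// which is what the "verifying QOS class" step above checks.
}
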
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":303,"completed":237,"skipped":3934,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSS ------------------------------ [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:54:04.962: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: waiting for pod running STEP: creating a file in subpath Aug 17 12:54:17.192: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-3085 PodName:var-expansion-00a1a32e-f08a-41b7-a0ea-8651c7861de3 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 17 12:54:17.192: INFO: >>> kubeConfig: /root/.kube/config I0817 12:54:17.251382 10 log.go:181] (0x40039748f0) (0x4001793680) Create stream I0817 12:54:17.251679 10 log.go:181] (0x40039748f0) (0x4001793680) Stream added, broadcasting: 1 I0817 12:54:17.255147 10 log.go:181] (0x40039748f0) Reply frame received for 1 I0817 12:54:17.255341 10 log.go:181] (0x40039748f0) (0x4001793720) Create stream I0817 12:54:17.255440 10 log.go:181] (0x40039748f0) (0x4001793720) Stream added, broadcasting: 3 I0817 12:54:17.256827 10 log.go:181] (0x40039748f0) Reply frame received for 3 I0817 12:54:17.256974 10 log.go:181] (0x40039748f0) (0x4001dcc500) Create stream I0817 12:54:17.257055 10 log.go:181] (0x40039748f0) (0x4001dcc500) Stream added, broadcasting: 5 I0817 12:54:17.258613 10 log.go:181] (0x40039748f0) Reply frame received for 5 I0817 12:54:17.327168 10 log.go:181] (0x40039748f0) Data frame received for 5 I0817 12:54:17.327396 10 log.go:181] (0x4001dcc500) (5) Data frame handling I0817 12:54:17.327544 10 log.go:181] (0x40039748f0) Data frame received for 3 I0817 12:54:17.327666 10 log.go:181] (0x4001793720) (3) Data frame handling I0817 12:54:17.328498 10 log.go:181] (0x40039748f0) Data frame received for 1 I0817 12:54:17.328666 10 log.go:181] (0x4001793680) (1) Data frame handling I0817 12:54:17.329013 10 log.go:181] (0x4001793680) (1) Data frame sent I0817 12:54:17.329145 10 log.go:181] (0x40039748f0) (0x4001793680) Stream removed, broadcasting: 1 I0817 12:54:17.329299 10 log.go:181] (0x40039748f0) Go away received I0817 12:54:17.329550 10 log.go:181] (0x40039748f0) (0x4001793680) Stream removed, broadcasting: 1 I0817 12:54:17.329684 10 log.go:181] (0x40039748f0) (0x4001793720) Stream removed, broadcasting: 3 I0817 
12:54:17.329856 10 log.go:181] (0x40039748f0) (0x4001dcc500) Stream removed, broadcasting: 5 STEP: test for file in mounted path Aug 17 12:54:17.337: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-3085 PodName:var-expansion-00a1a32e-f08a-41b7-a0ea-8651c7861de3 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 17 12:54:17.337: INFO: >>> kubeConfig: /root/.kube/config I0817 12:54:17.642517 10 log.go:181] (0x40033cc160) (0x4004644000) Create stream I0817 12:54:17.642698 10 log.go:181] (0x40033cc160) (0x4004644000) Stream added, broadcasting: 1 I0817 12:54:17.646476 10 log.go:181] (0x40033cc160) Reply frame received for 1 I0817 12:54:17.646677 10 log.go:181] (0x40033cc160) (0x40046440a0) Create stream I0817 12:54:17.646774 10 log.go:181] (0x40033cc160) (0x40046440a0) Stream added, broadcasting: 3 I0817 12:54:17.648217 10 log.go:181] (0x40033cc160) Reply frame received for 3 I0817 12:54:17.648354 10 log.go:181] (0x40033cc160) (0x40042a7a40) Create stream I0817 12:54:17.648424 10 log.go:181] (0x40033cc160) (0x40042a7a40) Stream added, broadcasting: 5 I0817 12:54:17.649953 10 log.go:181] (0x40033cc160) Reply frame received for 5 I0817 12:54:17.702637 10 log.go:181] (0x40033cc160) Data frame received for 3 I0817 12:54:17.702899 10 log.go:181] (0x40046440a0) (3) Data frame handling I0817 12:54:17.703092 10 log.go:181] (0x40033cc160) Data frame received for 5 I0817 12:54:17.703244 10 log.go:181] (0x40042a7a40) (5) Data frame handling I0817 12:54:17.703796 10 log.go:181] (0x40033cc160) Data frame received for 1 I0817 12:54:17.703922 10 log.go:181] (0x4004644000) (1) Data frame handling I0817 12:54:17.704038 10 log.go:181] (0x4004644000) (1) Data frame sent I0817 12:54:17.704156 10 log.go:181] (0x40033cc160) (0x4004644000) Stream removed, broadcasting: 1 I0817 12:54:17.704345 10 log.go:181] (0x40033cc160) Go away received I0817 12:54:17.704634 10 log.go:181] (0x40033cc160) (0x4004644000) Stream removed, broadcasting: 1 I0817 12:54:17.704867 10 log.go:181] (0x40033cc160) (0x40046440a0) Stream removed, broadcasting: 3 I0817 12:54:17.704978 10 log.go:181] (0x40033cc160) (0x40042a7a40) Stream removed, broadcasting: 5 STEP: updating the annotation value Aug 17 12:54:18.219: INFO: Successfully updated pod "var-expansion-00a1a32e-f08a-41b7-a0ea-8651c7861de3" STEP: waiting for annotated pod running STEP: deleting the pod gracefully Aug 17 12:54:18.234: INFO: Deleting pod "var-expansion-00a1a32e-f08a-41b7-a0ea-8651c7861de3" in namespace "var-expansion-3085" Aug 17 12:54:18.252: INFO: Wait up to 5m0s for pod "var-expansion-00a1a32e-f08a-41b7-a0ea-8651c7861de3" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:54:54.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3085" for this suite. 
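
A rough sketch of the mechanism this spec exercises: subPathExpr expands $(VAR) references against the container's environment, so each pod can write under its own subdirectory of a shared volume. This is a simplified stand-in for the e2e pod, not its actual spec; the image, command, and paths are assumptions.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox", // illustrative image
				Command: []string{"sh", "-c", "touch /volume_mount/test.log && sleep 3600"},
				// POD_NAME is fed from the downward API and expanded below.
				Env: []corev1.EnvVar{{
					Name: "POD_NAME",
					ValueFrom: &corev1.EnvVarSource{
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
					},
				}},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "workdir",
					MountPath: "/volume_mount",
					// Each pod writes under a subdirectory named after itself.
					SubPathExpr: "$(POD_NAME)",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name:         "workdir",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
		},
	}
	fmt.Println(pod.Spec.Containers[0].VolumeMounts[0].SubPathExpr)
}
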
• [SLOW TEST:49.353 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]","total":303,"completed":238,"skipped":3937,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:54:54.315: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override all Aug 17 12:54:54.944: INFO: Waiting up to 5m0s for pod "client-containers-4ee0d0c3-2edd-4b4e-a7fd-968d02d15c2e" in namespace "containers-2680" to be "Succeeded or Failed" Aug 17 12:54:55.186: INFO: Pod "client-containers-4ee0d0c3-2edd-4b4e-a7fd-968d02d15c2e": Phase="Pending", Reason="", readiness=false. Elapsed: 242.313734ms Aug 17 12:54:57.404: INFO: Pod "client-containers-4ee0d0c3-2edd-4b4e-a7fd-968d02d15c2e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.459548636s Aug 17 12:54:59.423: INFO: Pod "client-containers-4ee0d0c3-2edd-4b4e-a7fd-968d02d15c2e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.479054113s Aug 17 12:55:01.430: INFO: Pod "client-containers-4ee0d0c3-2edd-4b4e-a7fd-968d02d15c2e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.486119619s STEP: Saw pod success Aug 17 12:55:01.430: INFO: Pod "client-containers-4ee0d0c3-2edd-4b4e-a7fd-968d02d15c2e" satisfied condition "Succeeded or Failed" Aug 17 12:55:01.435: INFO: Trying to get logs from node latest-worker2 pod client-containers-4ee0d0c3-2edd-4b4e-a7fd-968d02d15c2e container test-container: STEP: delete the pod Aug 17 12:55:01.654: INFO: Waiting for pod client-containers-4ee0d0c3-2edd-4b4e-a7fd-968d02d15c2e to disappear Aug 17 12:55:01.667: INFO: Pod client-containers-4ee0d0c3-2edd-4b4e-a7fd-968d02d15c2e no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:55:01.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-2680" for this suite. • [SLOW TEST:7.435 seconds] [k8s.io] Docker Containers /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":303,"completed":239,"skipped":3937,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:55:01.752: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod Aug 17 12:55:08.519: INFO: Successfully updated pod "labelsupdate4351a45c-8b56-4419-9a31-45a7f23d814d" [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:55:10.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6759" for this suite. 
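
A sketch of the volume shape this spec relies on: a projected downward-API volume exposes metadata.labels as a file, and the kubelet rewrites that file after the pod's labels are updated, which is the refresh the log above waits for. The image, command, and mount path are illustrative assumptions.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "labelsupdate-demo",
			Labels: map[string]string{"key": "value1"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox", // illustrative image
				Command: []string{"sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							// The kubelet keeps this file in sync with the
							// pod's current labels.
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path:     "labels",
									FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"},
								}},
							},
						}},
					},
				},
			}},
		},
	}
	fmt.Println(pod.Spec.Volumes[0].Projected != nil)
}
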
• [SLOW TEST:8.804 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":303,"completed":240,"skipped":3951,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SS ------------------------------ [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:55:10.557: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-e12c8915-d1d2-4e1b-b5b4-680bdde89ebd in namespace container-probe-3974 Aug 17 12:55:16.729: INFO: Started pod liveness-e12c8915-d1d2-4e1b-b5b4-680bdde89ebd in namespace container-probe-3974 STEP: checking the pod's current state and verifying that restartCount is present Aug 17 12:55:16.733: INFO: Initial restart count of pod liveness-e12c8915-d1d2-4e1b-b5b4-680bdde89ebd is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:59:18.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3974" for this suite. 
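
A sketch of a pod carrying the kind of tcp:8080 liveness probe exercised above. The agnhost "netexec" arguments are an assumption; any server listening on the port would keep the probe passing.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-tcp-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "main",
				Image: "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20",
				Args:  []string{"netexec", "--http-port=8080"}, // assumed agnhost invocation
				LivenessProbe: &corev1.Probe{
					// In the 1.19 API this embedded field is named Handler;
					// later releases rename it ProbeHandler.
					Handler: corev1.Handler{
						TCPSocket: &corev1.TCPSocketAction{Port: intstr.FromInt(8080)},
					},
					InitialDelaySeconds: 15,
					PeriodSeconds:       5,
				},
			}},
		},
	}
	fmt.Println(pod.Spec.Containers[0].LivenessProbe.TCPSocket.Port.IntValue())
}

While the probe keeps succeeding, the container's restartCount stays at 0; the spec watches that counter for roughly four minutes, which is why it runs over 240 seconds.
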
• [SLOW TEST:248.620 seconds] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":303,"completed":241,"skipped":3953,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:59:19.179: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 17 12:59:20.404: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bd503859-e7ec-4d71-8cb3-e5e3665490df" in namespace "downward-api-7100" to be "Succeeded or Failed" Aug 17 12:59:20.484: INFO: Pod "downwardapi-volume-bd503859-e7ec-4d71-8cb3-e5e3665490df": Phase="Pending", Reason="", readiness=false. Elapsed: 79.607513ms Aug 17 12:59:22.643: INFO: Pod "downwardapi-volume-bd503859-e7ec-4d71-8cb3-e5e3665490df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.238838057s Aug 17 12:59:24.796: INFO: Pod "downwardapi-volume-bd503859-e7ec-4d71-8cb3-e5e3665490df": Phase="Pending", Reason="", readiness=false. Elapsed: 4.391976988s Aug 17 12:59:26.836: INFO: Pod "downwardapi-volume-bd503859-e7ec-4d71-8cb3-e5e3665490df": Phase="Pending", Reason="", readiness=false. Elapsed: 6.431401349s Aug 17 12:59:28.843: INFO: Pod "downwardapi-volume-bd503859-e7ec-4d71-8cb3-e5e3665490df": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.439015762s STEP: Saw pod success Aug 17 12:59:28.844: INFO: Pod "downwardapi-volume-bd503859-e7ec-4d71-8cb3-e5e3665490df" satisfied condition "Succeeded or Failed" Aug 17 12:59:28.849: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-bd503859-e7ec-4d71-8cb3-e5e3665490df container client-container: STEP: delete the pod Aug 17 12:59:29.023: INFO: Waiting for pod downwardapi-volume-bd503859-e7ec-4d71-8cb3-e5e3665490df to disappear Aug 17 12:59:29.052: INFO: Pod downwardapi-volume-bd503859-e7ec-4d71-8cb3-e5e3665490df no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:59:29.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7100" for this suite. • [SLOW TEST:9.887 seconds] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":303,"completed":242,"skipped":3979,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:59:29.069: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 17 12:59:30.180: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5f0cf801-15a0-4b26-9524-bbd067bfb74f" in namespace "projected-8666" to be "Succeeded or Failed" Aug 17 12:59:30.407: INFO: Pod "downwardapi-volume-5f0cf801-15a0-4b26-9524-bbd067bfb74f": Phase="Pending", Reason="", readiness=false. Elapsed: 226.320695ms Aug 17 12:59:32.598: INFO: Pod "downwardapi-volume-5f0cf801-15a0-4b26-9524-bbd067bfb74f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.417998847s Aug 17 12:59:34.746: INFO: Pod "downwardapi-volume-5f0cf801-15a0-4b26-9524-bbd067bfb74f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.565829499s Aug 17 12:59:36.754: INFO: Pod "downwardapi-volume-5f0cf801-15a0-4b26-9524-bbd067bfb74f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.573578039s STEP: Saw pod success Aug 17 12:59:36.754: INFO: Pod "downwardapi-volume-5f0cf801-15a0-4b26-9524-bbd067bfb74f" satisfied condition "Succeeded or Failed" Aug 17 12:59:36.775: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-5f0cf801-15a0-4b26-9524-bbd067bfb74f container client-container: STEP: delete the pod Aug 17 12:59:36.949: INFO: Waiting for pod downwardapi-volume-5f0cf801-15a0-4b26-9524-bbd067bfb74f to disappear Aug 17 12:59:36.961: INFO: Pod downwardapi-volume-5f0cf801-15a0-4b26-9524-bbd067bfb74f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:59:36.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8666" for this suite. • [SLOW TEST:7.964 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":243,"skipped":4044,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:59:37.034: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir volume type on tmpfs Aug 17 12:59:37.143: INFO: Waiting up to 5m0s for pod "pod-86fa710e-3bc9-4342-ba2d-c7fd45f2bad8" in namespace "emptydir-7308" to be "Succeeded or Failed" Aug 17 12:59:37.159: INFO: Pod "pod-86fa710e-3bc9-4342-ba2d-c7fd45f2bad8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 15.486945ms Aug 17 12:59:39.225: INFO: Pod "pod-86fa710e-3bc9-4342-ba2d-c7fd45f2bad8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080957423s Aug 17 12:59:41.238: INFO: Pod "pod-86fa710e-3bc9-4342-ba2d-c7fd45f2bad8": Phase="Running", Reason="", readiness=true. Elapsed: 4.094097584s Aug 17 12:59:43.255: INFO: Pod "pod-86fa710e-3bc9-4342-ba2d-c7fd45f2bad8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.111273453s STEP: Saw pod success Aug 17 12:59:43.255: INFO: Pod "pod-86fa710e-3bc9-4342-ba2d-c7fd45f2bad8" satisfied condition "Succeeded or Failed" Aug 17 12:59:43.259: INFO: Trying to get logs from node latest-worker2 pod pod-86fa710e-3bc9-4342-ba2d-c7fd45f2bad8 container test-container: STEP: delete the pod Aug 17 12:59:43.310: INFO: Waiting for pod pod-86fa710e-3bc9-4342-ba2d-c7fd45f2bad8 to disappear Aug 17 12:59:43.338: INFO: Pod pod-86fa710e-3bc9-4342-ba2d-c7fd45f2bad8 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:59:43.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7308" for this suite. • [SLOW TEST:6.320 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":244,"skipped":4054,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:59:43.355: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 17 12:59:43.462: INFO: Waiting up to 5m0s for pod "downwardapi-volume-298c577a-e5f6-4408-abbc-12f63846a1c5" in namespace "downward-api-2732" to be "Succeeded or 
Failed" Aug 17 12:59:43.507: INFO: Pod "downwardapi-volume-298c577a-e5f6-4408-abbc-12f63846a1c5": Phase="Pending", Reason="", readiness=false. Elapsed: 45.034317ms Aug 17 12:59:45.592: INFO: Pod "downwardapi-volume-298c577a-e5f6-4408-abbc-12f63846a1c5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.13051768s Aug 17 12:59:47.623: INFO: Pod "downwardapi-volume-298c577a-e5f6-4408-abbc-12f63846a1c5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.161328658s Aug 17 12:59:49.631: INFO: Pod "downwardapi-volume-298c577a-e5f6-4408-abbc-12f63846a1c5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.169387977s STEP: Saw pod success Aug 17 12:59:49.631: INFO: Pod "downwardapi-volume-298c577a-e5f6-4408-abbc-12f63846a1c5" satisfied condition "Succeeded or Failed" Aug 17 12:59:49.637: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-298c577a-e5f6-4408-abbc-12f63846a1c5 container client-container: STEP: delete the pod Aug 17 12:59:49.671: INFO: Waiting for pod downwardapi-volume-298c577a-e5f6-4408-abbc-12f63846a1c5 to disappear Aug 17 12:59:49.681: INFO: Pod downwardapi-volume-298c577a-e5f6-4408-abbc-12f63846a1c5 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:59:49.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2732" for this suite. • [SLOW TEST:6.341 seconds] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":303,"completed":245,"skipped":4066,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:59:49.698: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-983354aa-bf4e-4073-b487-73eff142fb68 STEP: Creating a 
pod to test consume configMaps Aug 17 12:59:49.863: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f29a71a2-bfed-419d-a21f-8e5be716487f" in namespace "projected-2287" to be "Succeeded or Failed" Aug 17 12:59:49.866: INFO: Pod "pod-projected-configmaps-f29a71a2-bfed-419d-a21f-8e5be716487f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.42002ms Aug 17 12:59:52.233: INFO: Pod "pod-projected-configmaps-f29a71a2-bfed-419d-a21f-8e5be716487f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.370048073s Aug 17 12:59:54.241: INFO: Pod "pod-projected-configmaps-f29a71a2-bfed-419d-a21f-8e5be716487f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.378015719s Aug 17 12:59:56.247: INFO: Pod "pod-projected-configmaps-f29a71a2-bfed-419d-a21f-8e5be716487f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.384142372s STEP: Saw pod success Aug 17 12:59:56.247: INFO: Pod "pod-projected-configmaps-f29a71a2-bfed-419d-a21f-8e5be716487f" satisfied condition "Succeeded or Failed" Aug 17 12:59:56.252: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-f29a71a2-bfed-419d-a21f-8e5be716487f container projected-configmap-volume-test: STEP: delete the pod Aug 17 12:59:56.958: INFO: Waiting for pod pod-projected-configmaps-f29a71a2-bfed-419d-a21f-8e5be716487f to disappear Aug 17 12:59:57.159: INFO: Pod pod-projected-configmaps-f29a71a2-bfed-419d-a21f-8e5be716487f no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 12:59:57.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2287" for this suite. 
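[The manifest behind this test is likewise not in the log; a minimal reproduction of a projected ConfigMap volume with defaultMode is sketched below (the mode value, image, and object names are assumptions). defaultMode applies to every file the projected volume writes unless a per-item mode overrides it.

kubectl create configmap demo-config --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo                 # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: busybox:1.32                   # assumed image; only sh and stat are needed
    command: ["sh", "-c", "stat -c '%a' /etc/projected/data-1"]
    volumeMounts:
    - name: cm
      mountPath: /etc/projected
  volumes:
  - name: cm
    projected:
      defaultMode: 0400                   # assumed value; the test asserts the mounted file carries the configured mode
      sources:
      - configMap:
          name: demo-config
EOF

Once the pod reaches Succeeded, kubectl logs projected-cm-demo prints 400, the octal mode the volume applied.]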
• [SLOW TEST:7.732 seconds] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":246,"skipped":4075,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 12:59:57.434: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 17 12:59:57.863: INFO: Waiting up to 5m0s for pod "downwardapi-volume-017b35ff-e79f-4bf9-90b1-4e66a6f6540f" in namespace "projected-5691" to be "Succeeded or Failed" Aug 17 12:59:58.005: INFO: Pod "downwardapi-volume-017b35ff-e79f-4bf9-90b1-4e66a6f6540f": Phase="Pending", Reason="", readiness=false. Elapsed: 141.802148ms Aug 17 13:00:00.527: INFO: Pod "downwardapi-volume-017b35ff-e79f-4bf9-90b1-4e66a6f6540f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.664604777s Aug 17 13:00:02.556: INFO: Pod "downwardapi-volume-017b35ff-e79f-4bf9-90b1-4e66a6f6540f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.693542754s Aug 17 13:00:04.767: INFO: Pod "downwardapi-volume-017b35ff-e79f-4bf9-90b1-4e66a6f6540f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.904327692s Aug 17 13:00:06.776: INFO: Pod "downwardapi-volume-017b35ff-e79f-4bf9-90b1-4e66a6f6540f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.912914527s Aug 17 13:00:08.783: INFO: Pod "downwardapi-volume-017b35ff-e79f-4bf9-90b1-4e66a6f6540f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.920551222s STEP: Saw pod success Aug 17 13:00:08.784: INFO: Pod "downwardapi-volume-017b35ff-e79f-4bf9-90b1-4e66a6f6540f" satisfied condition "Succeeded or Failed" Aug 17 13:00:08.788: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-017b35ff-e79f-4bf9-90b1-4e66a6f6540f container client-container: STEP: delete the pod Aug 17 13:00:08.881: INFO: Waiting for pod downwardapi-volume-017b35ff-e79f-4bf9-90b1-4e66a6f6540f to disappear Aug 17 13:00:09.101: INFO: Pod downwardapi-volume-017b35ff-e79f-4bf9-90b1-4e66a6f6540f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 13:00:09.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5691" for this suite. • [SLOW TEST:11.686 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":303,"completed":247,"skipped":4107,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] version v1 /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 13:00:09.121: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-2ncjn in namespace proxy-330 I0817 13:00:10.365417 10 runners.go:190] Created replication controller with name: proxy-service-2ncjn, namespace: proxy-330, replica count: 1 I0817 13:00:11.416975 10 runners.go:190] proxy-service-2ncjn Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0817 13:00:12.417570 10 runners.go:190] proxy-service-2ncjn Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0817 13:00:13.418353 10 runners.go:190] proxy-service-2ncjn Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 
terminating, 0 unknown, 0 runningButNotReady I0817 13:00:14.419016 10 runners.go:190] proxy-service-2ncjn Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0817 13:00:15.419612 10 runners.go:190] proxy-service-2ncjn Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0817 13:00:16.420226 10 runners.go:190] proxy-service-2ncjn Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0817 13:00:17.421094 10 runners.go:190] proxy-service-2ncjn Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0817 13:00:18.421661 10 runners.go:190] proxy-service-2ncjn Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0817 13:00:19.422332 10 runners.go:190] proxy-service-2ncjn Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0817 13:00:20.423105 10 runners.go:190] proxy-service-2ncjn Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0817 13:00:21.423679 10 runners.go:190] proxy-service-2ncjn Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 17 13:00:21.909: INFO: setup took 12.29036472s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Aug 17 13:00:22.433: INFO: (0) /api/v1/namespaces/proxy-330/pods/http:proxy-service-2ncjn-q7kp4:162/proxy/: bar (200; 522.234216ms) Aug 17 13:00:22.434: INFO: (0) /api/v1/namespaces/proxy-330/pods/http:proxy-service-2ncjn-q7kp4:1080/proxy/: t... (200; 521.600389ms) Aug 17 13:00:22.434: INFO: (0) /api/v1/namespaces/proxy-330/pods/proxy-service-2ncjn-q7kp4:162/proxy/: bar (200; 523.174029ms) Aug 17 13:00:22.434: INFO: (0) /api/v1/namespaces/proxy-330/pods/proxy-service-2ncjn-q7kp4:1080/proxy/: testtest (200; 522.116962ms) Aug 17 13:00:22.438: INFO: (0) /api/v1/namespaces/proxy-330/services/http:proxy-service-2ncjn:portname2/proxy/: bar (200; 527.104547ms) Aug 17 13:00:22.438: INFO: (0) /api/v1/namespaces/proxy-330/services/proxy-service-2ncjn:portname2/proxy/: bar (200; 527.689604ms) Aug 17 13:00:22.438: INFO: (0) /api/v1/namespaces/proxy-330/pods/proxy-service-2ncjn-q7kp4:160/proxy/: foo (200; 527.496273ms) Aug 17 13:00:22.438: INFO: (0) /api/v1/namespaces/proxy-330/services/http:proxy-service-2ncjn:portname1/proxy/: foo (200; 527.220653ms) Aug 17 13:00:22.439: INFO: (0) /api/v1/namespaces/proxy-330/pods/http:proxy-service-2ncjn-q7kp4:160/proxy/: foo (200; 528.013752ms) Aug 17 13:00:22.439: INFO: (0) /api/v1/namespaces/proxy-330/services/proxy-service-2ncjn:portname1/proxy/: foo (200; 528.530484ms) Aug 17 13:00:22.440: INFO: (0) /api/v1/namespaces/proxy-330/pods/https:proxy-service-2ncjn-q7kp4:462/proxy/: tls qux (200; 529.507049ms) Aug 17 13:00:22.440: INFO: (0) /api/v1/namespaces/proxy-330/services/https:proxy-service-2ncjn:tlsportname2/proxy/: tls qux (200; 529.800287ms) Aug 17 13:00:22.441: INFO: (0) /api/v1/namespaces/proxy-330/pods/https:proxy-service-2ncjn-q7kp4:443/proxy/: t... 
(200; 66.468512ms) Aug 17 13:00:22.703: INFO: (1) /api/v1/namespaces/proxy-330/pods/http:proxy-service-2ncjn-q7kp4:162/proxy/: bar (200; 260.054607ms) Aug 17 13:00:22.704: INFO: (1) /api/v1/namespaces/proxy-330/services/http:proxy-service-2ncjn:portname2/proxy/: bar (200; 261.197538ms) Aug 17 13:00:22.704: INFO: (1) /api/v1/namespaces/proxy-330/pods/http:proxy-service-2ncjn-q7kp4:160/proxy/: foo (200; 260.843804ms) Aug 17 13:00:22.705: INFO: (1) /api/v1/namespaces/proxy-330/pods/proxy-service-2ncjn-q7kp4:160/proxy/: foo (200; 261.871202ms) Aug 17 13:00:22.705: INFO: (1) /api/v1/namespaces/proxy-330/pods/proxy-service-2ncjn-q7kp4/proxy/: test (200; 262.100431ms) Aug 17 13:00:22.706: INFO: (1) /api/v1/namespaces/proxy-330/pods/https:proxy-service-2ncjn-q7kp4:460/proxy/: tls baz (200; 262.23261ms) Aug 17 13:00:22.706: INFO: (1) /api/v1/namespaces/proxy-330/pods/proxy-service-2ncjn-q7kp4:1080/proxy/: testtest (200; 6.425634ms) Aug 17 13:00:22.714: INFO: (2) /api/v1/namespaces/proxy-330/pods/proxy-service-2ncjn-q7kp4:1080/proxy/: testt... (200; 9.607499ms) Aug 17 13:00:22.717: INFO: (2) /api/v1/namespaces/proxy-330/pods/https:proxy-service-2ncjn-q7kp4:443/proxy/: testt... (200; 7.932821ms) Aug 17 13:00:22.728: INFO: (3) /api/v1/namespaces/proxy-330/services/proxy-service-2ncjn:portname1/proxy/: foo (200; 7.855954ms) Aug 17 13:00:22.728: INFO: (3) /api/v1/namespaces/proxy-330/pods/https:proxy-service-2ncjn-q7kp4:443/proxy/: test (200; 9.784158ms) Aug 17 13:00:22.730: INFO: (3) /api/v1/namespaces/proxy-330/services/http:proxy-service-2ncjn:portname2/proxy/: bar (200; 10.052447ms) Aug 17 13:00:22.734: INFO: (4) /api/v1/namespaces/proxy-330/pods/proxy-service-2ncjn-q7kp4:160/proxy/: foo (200; 4.212056ms) Aug 17 13:00:22.734: INFO: (4) /api/v1/namespaces/proxy-330/pods/proxy-service-2ncjn-q7kp4:1080/proxy/: testt... (200; 7.984263ms) Aug 17 13:00:22.738: INFO: (4) /api/v1/namespaces/proxy-330/services/https:proxy-service-2ncjn:tlsportname2/proxy/: tls qux (200; 8.45847ms) Aug 17 13:00:22.738: INFO: (4) /api/v1/namespaces/proxy-330/pods/proxy-service-2ncjn-q7kp4/proxy/: test (200; 8.142082ms) Aug 17 13:00:22.738: INFO: (4) /api/v1/namespaces/proxy-330/services/http:proxy-service-2ncjn:portname1/proxy/: foo (200; 8.590073ms) Aug 17 13:00:22.739: INFO: (4) /api/v1/namespaces/proxy-330/services/proxy-service-2ncjn:portname2/proxy/: bar (200; 8.632761ms) Aug 17 13:00:22.744: INFO: (5) /api/v1/namespaces/proxy-330/pods/proxy-service-2ncjn-q7kp4:162/proxy/: bar (200; 2.882032ms) Aug 17 13:00:22.749: INFO: (5) /api/v1/namespaces/proxy-330/pods/proxy-service-2ncjn-q7kp4:1080/proxy/: testt... 
(200; 10.734255ms) Aug 17 13:00:22.751: INFO: (5) /api/v1/namespaces/proxy-330/services/proxy-service-2ncjn:portname2/proxy/: bar (200; 12.055841ms) Aug 17 13:00:22.751: INFO: (5) /api/v1/namespaces/proxy-330/pods/proxy-service-2ncjn-q7kp4:160/proxy/: foo (200; 11.215175ms) Aug 17 13:00:22.751: INFO: (5) /api/v1/namespaces/proxy-330/pods/https:proxy-service-2ncjn-q7kp4:443/proxy/: test (200; 11.564429ms) Aug 17 13:00:22.752: INFO: (5) /api/v1/namespaces/proxy-330/pods/http:proxy-service-2ncjn-q7kp4:160/proxy/: foo (200; 10.914741ms) Aug 17 13:00:22.755: INFO: (6) /api/v1/namespaces/proxy-330/pods/proxy-service-2ncjn-q7kp4:162/proxy/: bar (200; 3.428731ms) Aug 17 13:00:22.755: INFO: (6) /api/v1/namespaces/proxy-330/pods/http:proxy-service-2ncjn-q7kp4:160/proxy/: foo (200; 3.739915ms) Aug 17 13:00:22.756: INFO: (6) /api/v1/namespaces/proxy-330/pods/https:proxy-service-2ncjn-q7kp4:460/proxy/: tls baz (200; 3.901957ms) Aug 17 13:00:22.756: INFO: (6) /api/v1/namespaces/proxy-330/pods/proxy-service-2ncjn-q7kp4:160/proxy/: foo (200; 3.411866ms) Aug 17 13:00:22.756: INFO: (6) /api/v1/namespaces/proxy-330/services/http:proxy-service-2ncjn:portname1/proxy/: foo (200; 4.104994ms) Aug 17 13:00:22.756: INFO: (6) /api/v1/namespaces/proxy-330/pods/http:proxy-service-2ncjn-q7kp4:1080/proxy/: t... (200; 3.945718ms) Aug 17 13:00:22.756: INFO: (6) /api/v1/namespaces/proxy-330/pods/proxy-service-2ncjn-q7kp4:1080/proxy/: testtest (200; 5.537852ms) Aug 17 13:00:22.758: INFO: (6) /api/v1/namespaces/proxy-330/services/https:proxy-service-2ncjn:tlsportname2/proxy/: tls qux (200; 5.484752ms) Aug 17 13:00:22.758: INFO: (6) /api/v1/namespaces/proxy-330/pods/https:proxy-service-2ncjn-q7kp4:443/proxy/: testt... (200; 5.602021ms) Aug 17 13:00:22.764: INFO: (7) /api/v1/namespaces/proxy-330/pods/proxy-service-2ncjn-q7kp4/proxy/: test (200; 5.77345ms) Aug 17 13:00:22.764: INFO: (7) /api/v1/namespaces/proxy-330/services/proxy-service-2ncjn:portname1/proxy/: foo (200; 6.118115ms) Aug 17 13:00:22.764: INFO: (7) /api/v1/namespaces/proxy-330/pods/http:proxy-service-2ncjn-q7kp4:160/proxy/: foo (200; 5.979843ms) Aug 17 13:00:22.765: INFO: (7) /api/v1/namespaces/proxy-330/services/https:proxy-service-2ncjn:tlsportname2/proxy/: tls qux (200; 6.162864ms) Aug 17 13:00:22.765: INFO: (7) /api/v1/namespaces/proxy-330/pods/proxy-service-2ncjn-q7kp4:160/proxy/: foo (200; 6.185758ms) Aug 17 13:00:22.769: INFO: (8) /api/v1/namespaces/proxy-330/pods/http:proxy-service-2ncjn-q7kp4:1080/proxy/: t... (200; 3.795673ms) Aug 17 13:00:22.769: INFO: (8) /api/v1/namespaces/proxy-330/pods/proxy-service-2ncjn-q7kp4:1080/proxy/: testtest (200; 7.938774ms) Aug 17 13:00:22.773: INFO: (8) /api/v1/namespaces/proxy-330/pods/https:proxy-service-2ncjn-q7kp4:443/proxy/: testtest (200; 6.458196ms) Aug 17 13:00:22.781: INFO: (9) /api/v1/namespaces/proxy-330/pods/http:proxy-service-2ncjn-q7kp4:1080/proxy/: t... 
(200; 6.810445ms) Aug 17 13:00:22.782: INFO: (9) /api/v1/namespaces/proxy-330/services/https:proxy-service-2ncjn:tlsportname2/proxy/: tls qux (200; 6.593005ms) Aug 17 13:00:22.782: INFO: (9) /api/v1/namespaces/proxy-330/services/http:proxy-service-2ncjn:portname2/proxy/: bar (200; 6.970915ms) Aug 17 13:00:22.782: INFO: (9) /api/v1/namespaces/proxy-330/services/proxy-service-2ncjn:portname2/proxy/: bar (200; 6.980731ms) Aug 17 13:00:22.782: INFO: (9) /api/v1/namespaces/proxy-330/services/proxy-service-2ncjn:portname1/proxy/: foo (200; 7.03441ms) Aug 17 13:00:22.782: INFO: (9) /api/v1/namespaces/proxy-330/services/http:proxy-service-2ncjn:portname1/proxy/: foo (200; 7.234311ms) Aug 17 13:00:22.788: INFO: (10) /api/v1/namespaces/proxy-330/pods/http:proxy-service-2ncjn-q7kp4:1080/proxy/: t... (200; 5.46322ms) Aug 17 13:00:22.788: INFO: (10) /api/v1/namespaces/proxy-330/services/https:proxy-service-2ncjn:tlsportname1/proxy/: tls baz (200; 5.984619ms) Aug 17 13:00:22.788: INFO: (10) /api/v1/namespaces/proxy-330/services/http:proxy-service-2ncjn:portname2/proxy/: bar (200; 6.08432ms) Aug 17 13:00:22.789: INFO: (10) /api/v1/namespaces/proxy-330/pods/proxy-service-2ncjn-q7kp4:162/proxy/: bar (200; 6.139559ms) Aug 17 13:00:22.789: INFO: (10) /api/v1/namespaces/proxy-330/pods/proxy-service-2ncjn-q7kp4/proxy/: test (200; 6.447263ms) Aug 17 13:00:22.789: INFO: (10) /api/v1/namespaces/proxy-330/services/http:proxy-service-2ncjn:portname1/proxy/: foo (200; 6.336296ms) Aug 17 13:00:22.789: INFO: (10) /api/v1/namespaces/proxy-330/pods/https:proxy-service-2ncjn-q7kp4:462/proxy/: tls qux (200; 6.384945ms) Aug 17 13:00:22.789: INFO: (10) /api/v1/namespaces/proxy-330/services/proxy-service-2ncjn:portname1/proxy/: foo (200; 6.514042ms) Aug 17 13:00:22.789: INFO: (10) /api/v1/namespaces/proxy-330/services/proxy-service-2ncjn:portname2/proxy/: bar (200; 6.75849ms) Aug 17 13:00:22.789: INFO: (10) /api/v1/namespaces/proxy-330/pods/proxy-service-2ncjn-q7kp4:160/proxy/: foo (200; 6.996458ms) Aug 17 13:00:22.789: INFO: (10) /api/v1/namespaces/proxy-330/pods/https:proxy-service-2ncjn-q7kp4:460/proxy/: tls baz (200; 7.08486ms) Aug 17 13:00:22.789: INFO: (10) /api/v1/namespaces/proxy-330/pods/proxy-service-2ncjn-q7kp4:1080/proxy/: testt... (200; 3.755465ms) Aug 17 13:00:22.795: INFO: (11) /api/v1/namespaces/proxy-330/pods/proxy-service-2ncjn-q7kp4:162/proxy/: bar (200; 4.760635ms) Aug 17 13:00:22.795: INFO: (11) /api/v1/namespaces/proxy-330/pods/proxy-service-2ncjn-q7kp4:160/proxy/: foo (200; 5.283008ms) Aug 17 13:00:22.795: INFO: (11) /api/v1/namespaces/proxy-330/pods/proxy-service-2ncjn-q7kp4/proxy/: test (200; 5.408243ms) Aug 17 13:00:22.796: INFO: (11) /api/v1/namespaces/proxy-330/services/http:proxy-service-2ncjn:portname1/proxy/: foo (200; 6.239998ms) Aug 17 13:00:22.797: INFO: (11) /api/v1/namespaces/proxy-330/pods/https:proxy-service-2ncjn-q7kp4:443/proxy/: testtest (200; 4.047135ms) Aug 17 13:00:22.803: INFO: (12) /api/v1/namespaces/proxy-330/pods/http:proxy-service-2ncjn-q7kp4:162/proxy/: bar (200; 4.519647ms) Aug 17 13:00:22.803: INFO: (12) /api/v1/namespaces/proxy-330/pods/https:proxy-service-2ncjn-q7kp4:443/proxy/: testt... 
(200; 5.845484ms) Aug 17 13:00:22.804: INFO: (12) /api/v1/namespaces/proxy-330/services/proxy-service-2ncjn:portname2/proxy/: bar (200; 6.130851ms) Aug 17 13:00:22.804: INFO: (12) /api/v1/namespaces/proxy-330/pods/proxy-service-2ncjn-q7kp4:162/proxy/: bar (200; 6.204031ms) Aug 17 13:00:22.805: INFO: (12) /api/v1/namespaces/proxy-330/services/http:proxy-service-2ncjn:portname2/proxy/: bar (200; 6.593503ms) Aug 17 13:00:22.808: INFO: (13) /api/v1/namespaces/proxy-330/pods/proxy-service-2ncjn-q7kp4:160/proxy/: foo (200; 2.793369ms) Aug 17 13:00:22.809: INFO: (13) /api/v1/namespaces/proxy-330/pods/http:proxy-service-2ncjn-q7kp4:1080/proxy/: t... (200; 3.607366ms) Aug 17 13:00:22.809: INFO: (13) /api/v1/namespaces/proxy-330/pods/proxy-service-2ncjn-q7kp4:1080/proxy/: testtest (200; 5.097895ms) Aug 17 13:00:22.810: INFO: (13) /api/v1/namespaces/proxy-330/pods/https:proxy-service-2ncjn-q7kp4:460/proxy/: tls baz (200; 5.048768ms) Aug 17 13:00:22.810: INFO: (13) /api/v1/namespaces/proxy-330/pods/http:proxy-service-2ncjn-q7kp4:162/proxy/: bar (200; 5.196427ms) Aug 17 13:00:22.811: INFO: (13) /api/v1/namespaces/proxy-330/pods/https:proxy-service-2ncjn-q7kp4:443/proxy/: testtest (200; 6.08595ms) Aug 17 13:00:22.819: INFO: (14) /api/v1/namespaces/proxy-330/pods/http:proxy-service-2ncjn-q7kp4:1080/proxy/: t... (200; 7.141823ms) Aug 17 13:00:22.822: INFO: (15) /api/v1/namespaces/proxy-330/pods/http:proxy-service-2ncjn-q7kp4:160/proxy/: foo (200; 3.219478ms) Aug 17 13:00:22.823: INFO: (15) /api/v1/namespaces/proxy-330/pods/proxy-service-2ncjn-q7kp4:162/proxy/: bar (200; 2.937676ms) Aug 17 13:00:22.824: INFO: (15) /api/v1/namespaces/proxy-330/pods/proxy-service-2ncjn-q7kp4:160/proxy/: foo (200; 3.682082ms) Aug 17 13:00:22.824: INFO: (15) /api/v1/namespaces/proxy-330/services/proxy-service-2ncjn:portname2/proxy/: bar (200; 5.141604ms) Aug 17 13:00:22.824: INFO: (15) /api/v1/namespaces/proxy-330/pods/proxy-service-2ncjn-q7kp4/proxy/: test (200; 3.712992ms) Aug 17 13:00:22.824: INFO: (15) /api/v1/namespaces/proxy-330/pods/https:proxy-service-2ncjn-q7kp4:460/proxy/: tls baz (200; 4.250404ms) Aug 17 13:00:22.824: INFO: (15) /api/v1/namespaces/proxy-330/pods/http:proxy-service-2ncjn-q7kp4:162/proxy/: bar (200; 3.447551ms) Aug 17 13:00:22.825: INFO: (15) /api/v1/namespaces/proxy-330/services/http:proxy-service-2ncjn:portname1/proxy/: foo (200; 5.305536ms) Aug 17 13:00:22.825: INFO: (15) /api/v1/namespaces/proxy-330/services/http:proxy-service-2ncjn:portname2/proxy/: bar (200; 3.863749ms) Aug 17 13:00:22.825: INFO: (15) /api/v1/namespaces/proxy-330/pods/https:proxy-service-2ncjn-q7kp4:443/proxy/: t... (200; 4.703786ms) Aug 17 13:00:22.825: INFO: (15) /api/v1/namespaces/proxy-330/services/https:proxy-service-2ncjn:tlsportname2/proxy/: tls qux (200; 6.247745ms) Aug 17 13:00:22.825: INFO: (15) /api/v1/namespaces/proxy-330/services/proxy-service-2ncjn:portname1/proxy/: foo (200; 3.973638ms) Aug 17 13:00:22.825: INFO: (15) /api/v1/namespaces/proxy-330/pods/proxy-service-2ncjn-q7kp4:1080/proxy/: testt... 
(200; 6.27667ms) Aug 17 13:00:22.832: INFO: (16) /api/v1/namespaces/proxy-330/services/http:proxy-service-2ncjn:portname1/proxy/: foo (200; 6.425965ms) Aug 17 13:00:22.833: INFO: (16) /api/v1/namespaces/proxy-330/pods/proxy-service-2ncjn-q7kp4:162/proxy/: bar (200; 6.519729ms) Aug 17 13:00:22.833: INFO: (16) /api/v1/namespaces/proxy-330/pods/http:proxy-service-2ncjn-q7kp4:162/proxy/: bar (200; 6.729405ms) Aug 17 13:00:22.833: INFO: (16) /api/v1/namespaces/proxy-330/pods/https:proxy-service-2ncjn-q7kp4:443/proxy/: testtest (200; 7.538833ms) Aug 17 13:00:22.833: INFO: (16) /api/v1/namespaces/proxy-330/services/http:proxy-service-2ncjn:portname2/proxy/: bar (200; 7.22365ms) Aug 17 13:00:22.833: INFO: (16) /api/v1/namespaces/proxy-330/services/https:proxy-service-2ncjn:tlsportname1/proxy/: tls baz (200; 7.801338ms) Aug 17 13:00:22.833: INFO: (16) /api/v1/namespaces/proxy-330/services/https:proxy-service-2ncjn:tlsportname2/proxy/: tls qux (200; 7.49244ms) Aug 17 13:00:22.834: INFO: (16) /api/v1/namespaces/proxy-330/pods/https:proxy-service-2ncjn-q7kp4:460/proxy/: tls baz (200; 7.454112ms) Aug 17 13:00:22.834: INFO: (16) /api/v1/namespaces/proxy-330/pods/proxy-service-2ncjn-q7kp4:160/proxy/: foo (200; 7.516716ms) Aug 17 13:00:22.834: INFO: (16) /api/v1/namespaces/proxy-330/services/proxy-service-2ncjn:portname1/proxy/: foo (200; 7.893722ms) Aug 17 13:00:22.849: INFO: (17) /api/v1/namespaces/proxy-330/pods/http:proxy-service-2ncjn-q7kp4:162/proxy/: bar (200; 15.221093ms) Aug 17 13:00:22.849: INFO: (17) /api/v1/namespaces/proxy-330/pods/proxy-service-2ncjn-q7kp4/proxy/: test (200; 15.227796ms) Aug 17 13:00:22.849: INFO: (17) /api/v1/namespaces/proxy-330/services/proxy-service-2ncjn:portname2/proxy/: bar (200; 15.006402ms) Aug 17 13:00:22.849: INFO: (17) /api/v1/namespaces/proxy-330/services/http:proxy-service-2ncjn:portname1/proxy/: foo (200; 15.11194ms) Aug 17 13:00:22.849: INFO: (17) /api/v1/namespaces/proxy-330/pods/http:proxy-service-2ncjn-q7kp4:160/proxy/: foo (200; 15.217947ms) Aug 17 13:00:22.849: INFO: (17) /api/v1/namespaces/proxy-330/pods/http:proxy-service-2ncjn-q7kp4:1080/proxy/: t... (200; 15.311367ms) Aug 17 13:00:22.849: INFO: (17) /api/v1/namespaces/proxy-330/services/https:proxy-service-2ncjn:tlsportname1/proxy/: tls baz (200; 15.279936ms) Aug 17 13:00:22.849: INFO: (17) /api/v1/namespaces/proxy-330/pods/proxy-service-2ncjn-q7kp4:1080/proxy/: testtest (200; 6.858022ms) Aug 17 13:00:22.858: INFO: (18) /api/v1/namespaces/proxy-330/services/https:proxy-service-2ncjn:tlsportname1/proxy/: tls baz (200; 6.938673ms) Aug 17 13:00:22.859: INFO: (18) /api/v1/namespaces/proxy-330/pods/https:proxy-service-2ncjn-q7kp4:462/proxy/: tls qux (200; 6.736073ms) Aug 17 13:00:22.859: INFO: (18) /api/v1/namespaces/proxy-330/services/https:proxy-service-2ncjn:tlsportname2/proxy/: tls qux (200; 7.132912ms) Aug 17 13:00:22.859: INFO: (18) /api/v1/namespaces/proxy-330/pods/proxy-service-2ncjn-q7kp4:162/proxy/: bar (200; 7.141421ms) Aug 17 13:00:22.859: INFO: (18) /api/v1/namespaces/proxy-330/services/proxy-service-2ncjn:portname1/proxy/: foo (200; 7.095657ms) Aug 17 13:00:22.859: INFO: (18) /api/v1/namespaces/proxy-330/pods/http:proxy-service-2ncjn-q7kp4:1080/proxy/: t... (200; 7.293285ms) Aug 17 13:00:22.859: INFO: (18) /api/v1/namespaces/proxy-330/pods/proxy-service-2ncjn-q7kp4:1080/proxy/: testtesttest (200; 4.604321ms) Aug 17 13:00:22.866: INFO: (19) /api/v1/namespaces/proxy-330/pods/http:proxy-service-2ncjn-q7kp4:1080/proxy/: t... 
(200; 5.026769ms) Aug 17 13:00:22.866: INFO: (19) /api/v1/namespaces/proxy-330/pods/https:proxy-service-2ncjn-q7kp4:460/proxy/: tls baz (200; 5.225971ms) Aug 17 13:00:22.866: INFO: (19) /api/v1/namespaces/proxy-330/services/proxy-service-2ncjn:portname1/proxy/: foo (200; 5.312182ms) Aug 17 13:00:22.866: INFO: (19) /api/v1/namespaces/proxy-330/pods/proxy-service-2ncjn-q7kp4:162/proxy/: bar (200; 5.485457ms) Aug 17 13:00:22.866: INFO: (19) /api/v1/namespaces/proxy-330/pods/https:proxy-service-2ncjn-q7kp4:443/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Aug 17 13:00:30.468: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-7877 /api/v1/namespaces/watch-7877/configmaps/e2e-watch-test-resource-version cef05a7f-0fe6-4749-a54a-4d7d7450c882 731529 0 2020-08-17 13:00:30 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-08-17 13:00:30 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Aug 17 13:00:30.471: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-7877 /api/v1/namespaces/watch-7877/configmaps/e2e-watch-test-resource-version cef05a7f-0fe6-4749-a54a-4d7d7450c882 731530 0 2020-08-17 13:00:30 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-08-17 13:00:30 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 13:00:30.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7877" for this suite. 
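[This watch test's flow — update a ConfigMap twice, delete it, then open a watch at the resourceVersion returned by the first update and observe exactly the later MODIFIED and DELETED events — can be replayed with kubectl against the raw API. A sketch, assuming the default namespace and that etcd has not yet compacted the relevant revisions:

kubectl create configmap e2e-watch-demo --from-literal=mutation=0
kubectl patch configmap e2e-watch-demo -p '{"data":{"mutation":"1"}}'
RV=$(kubectl get configmap e2e-watch-demo -o jsonpath='{.metadata.resourceVersion}')  # version after the first update
kubectl patch configmap e2e-watch-demo -p '{"data":{"mutation":"2"}}'
kubectl delete configmap e2e-watch-demo
# Replay history from RV onward: expect one MODIFIED event (mutation: 2) followed by one DELETED
kubectl get --raw "/api/v1/namespaces/default/configmaps?watch=1&resourceVersion=${RV}&fieldSelector=metadata.name%3De2e-watch-demo"]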
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":303,"completed":249,"skipped":4133,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 13:00:30.491: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0817 13:01:11.518464 10 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Aug 17 13:02:13.545: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. Aug 17 13:02:13.546: INFO: Deleting pod "simpletest.rc-9w4fs" in namespace "gc-647" Aug 17 13:02:13.577: INFO: Deleting pod "simpletest.rc-9xmvh" in namespace "gc-647" Aug 17 13:02:13.672: INFO: Deleting pod "simpletest.rc-cr78s" in namespace "gc-647" Aug 17 13:02:13.720: INFO: Deleting pod "simpletest.rc-dxmhw" in namespace "gc-647" Aug 17 13:02:14.068: INFO: Deleting pod "simpletest.rc-hdvfg" in namespace "gc-647" Aug 17 13:02:14.344: INFO: Deleting pod "simpletest.rc-jxfnm" in namespace "gc-647" Aug 17 13:02:14.892: INFO: Deleting pod "simpletest.rc-mqtwt" in namespace "gc-647" Aug 17 13:02:15.420: INFO: Deleting pod "simpletest.rc-n2v6s" in namespace "gc-647" Aug 17 13:02:16.062: INFO: Deleting pod "simpletest.rc-rc7dr" in namespace "gc-647" Aug 17 13:02:16.295: INFO: Deleting pod "simpletest.rc-x8q98" in namespace "gc-647" [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 13:02:16.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-647" for this suite. 
• [SLOW TEST:106.296 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":303,"completed":250,"skipped":4153,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} S ------------------------------ [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 13:02:16.788: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-2158 STEP: creating service affinity-nodeport in namespace services-2158 STEP: creating replication controller affinity-nodeport in namespace services-2158 I0817 13:02:19.343727 10 runners.go:190] Created replication controller with name: affinity-nodeport, namespace: services-2158, replica count: 3 I0817 13:02:22.395172 10 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0817 13:02:25.395947 10 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0817 13:02:28.396849 10 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0817 13:02:31.397463 10 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 17 13:02:31.435: INFO: Creating new exec pod Aug 17 13:02:38.547: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-2158 execpod-affinitypgrsb -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport 80' Aug 17 13:02:47.196: INFO: stderr: "I0817 13:02:47.106540 2834 log.go:181] (0x400084a000) (0x40002bc140) Create stream\nI0817 13:02:47.112652 2834 log.go:181] (0x400084a000) (0x40002bc140) Stream 
added, broadcasting: 1\nI0817 13:02:47.121757 2834 log.go:181] (0x400084a000) Reply frame received for 1\nI0817 13:02:47.122277 2834 log.go:181] (0x400084a000) (0x40002bc1e0) Create stream\nI0817 13:02:47.122331 2834 log.go:181] (0x400084a000) (0x40002bc1e0) Stream added, broadcasting: 3\nI0817 13:02:47.123510 2834 log.go:181] (0x400084a000) Reply frame received for 3\nI0817 13:02:47.123715 2834 log.go:181] (0x400084a000) (0x4000c175e0) Create stream\nI0817 13:02:47.123763 2834 log.go:181] (0x400084a000) (0x4000c175e0) Stream added, broadcasting: 5\nI0817 13:02:47.124805 2834 log.go:181] (0x400084a000) Reply frame received for 5\nI0817 13:02:47.174264 2834 log.go:181] (0x400084a000) Data frame received for 5\nI0817 13:02:47.174552 2834 log.go:181] (0x4000c175e0) (5) Data frame handling\nI0817 13:02:47.175284 2834 log.go:181] (0x4000c175e0) (5) Data frame sent\nI0817 13:02:47.175713 2834 log.go:181] (0x400084a000) Data frame received for 5\nI0817 13:02:47.175794 2834 log.go:181] (0x4000c175e0) (5) Data frame handling\n+ nc -zv -t -w 2 affinity-nodeport 80\nI0817 13:02:47.177185 2834 log.go:181] (0x400084a000) Data frame received for 3\nI0817 13:02:47.177303 2834 log.go:181] (0x40002bc1e0) (3) Data frame handling\nI0817 13:02:47.177441 2834 log.go:181] (0x4000c175e0) (5) Data frame sent\nI0817 13:02:47.177571 2834 log.go:181] (0x400084a000) Data frame received for 5\nI0817 13:02:47.177702 2834 log.go:181] (0x4000c175e0) (5) Data frame handling\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\nI0817 13:02:47.178301 2834 log.go:181] (0x400084a000) Data frame received for 1\nI0817 13:02:47.178448 2834 log.go:181] (0x40002bc140) (1) Data frame handling\nI0817 13:02:47.178558 2834 log.go:181] (0x40002bc140) (1) Data frame sent\nI0817 13:02:47.181366 2834 log.go:181] (0x400084a000) (0x40002bc140) Stream removed, broadcasting: 1\nI0817 13:02:47.182284 2834 log.go:181] (0x400084a000) Go away received\nI0817 13:02:47.185982 2834 log.go:181] (0x400084a000) (0x40002bc140) Stream removed, broadcasting: 1\nI0817 13:02:47.186221 2834 log.go:181] (0x400084a000) (0x40002bc1e0) Stream removed, broadcasting: 3\nI0817 13:02:47.186388 2834 log.go:181] (0x400084a000) (0x4000c175e0) Stream removed, broadcasting: 5\n" Aug 17 13:02:47.197: INFO: stdout: "" Aug 17 13:02:47.200: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-2158 execpod-affinitypgrsb -- /bin/sh -x -c nc -zv -t -w 2 10.99.133.119 80' Aug 17 13:02:49.028: INFO: stderr: "I0817 13:02:48.921935 2855 log.go:181] (0x4000d129a0) (0x4000bd0780) Create stream\nI0817 13:02:48.926537 2855 log.go:181] (0x4000d129a0) (0x4000bd0780) Stream added, broadcasting: 1\nI0817 13:02:48.951774 2855 log.go:181] (0x4000d129a0) Reply frame received for 1\nI0817 13:02:48.952378 2855 log.go:181] (0x4000d129a0) (0x4000bd0000) Create stream\nI0817 13:02:48.952453 2855 log.go:181] (0x4000d129a0) (0x4000bd0000) Stream added, broadcasting: 3\nI0817 13:02:48.954119 2855 log.go:181] (0x4000d129a0) Reply frame received for 3\nI0817 13:02:48.954579 2855 log.go:181] (0x4000d129a0) (0x4000d0a000) Create stream\nI0817 13:02:48.954715 2855 log.go:181] (0x4000d129a0) (0x4000d0a000) Stream added, broadcasting: 5\nI0817 13:02:48.956591 2855 log.go:181] (0x4000d129a0) Reply frame received for 5\nI0817 13:02:49.006750 2855 log.go:181] (0x4000d129a0) Data frame received for 3\nI0817 13:02:49.006948 2855 log.go:181] (0x4000d129a0) Data frame received for 5\nI0817 13:02:49.007070 2855 log.go:181] 
(0x4000bd0000) (3) Data frame handling\nI0817 13:02:49.007274 2855 log.go:181] (0x4000d0a000) (5) Data frame handling\nI0817 13:02:49.007665 2855 log.go:181] (0x4000d129a0) Data frame received for 1\nI0817 13:02:49.007743 2855 log.go:181] (0x4000bd0780) (1) Data frame handling\nI0817 13:02:49.008849 2855 log.go:181] (0x4000d0a000) (5) Data frame sent\nI0817 13:02:49.009252 2855 log.go:181] (0x4000bd0780) (1) Data frame sent\nI0817 13:02:49.009520 2855 log.go:181] (0x4000d129a0) Data frame received for 5\nI0817 13:02:49.009594 2855 log.go:181] (0x4000d0a000) (5) Data frame handling\n+ nc -zv -t -w 2 10.99.133.119 80\nConnection to 10.99.133.119 80 port [tcp/http] succeeded!\nI0817 13:02:49.010951 2855 log.go:181] (0x4000d129a0) (0x4000bd0780) Stream removed, broadcasting: 1\nI0817 13:02:49.012259 2855 log.go:181] (0x4000d129a0) Go away received\nI0817 13:02:49.015848 2855 log.go:181] (0x4000d129a0) (0x4000bd0780) Stream removed, broadcasting: 1\nI0817 13:02:49.016362 2855 log.go:181] (0x4000d129a0) (0x4000bd0000) Stream removed, broadcasting: 3\nI0817 13:02:49.017103 2855 log.go:181] (0x4000d129a0) (0x4000d0a000) Stream removed, broadcasting: 5\n" Aug 17 13:02:49.029: INFO: stdout: "" Aug 17 13:02:49.029: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-2158 execpod-affinitypgrsb -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.11 30990' Aug 17 13:02:50.699: INFO: stderr: "I0817 13:02:50.591699 2875 log.go:181] (0x4000232370) (0x4000f18000) Create stream\nI0817 13:02:50.597303 2875 log.go:181] (0x4000232370) (0x4000f18000) Stream added, broadcasting: 1\nI0817 13:02:50.608670 2875 log.go:181] (0x4000232370) Reply frame received for 1\nI0817 13:02:50.609336 2875 log.go:181] (0x4000232370) (0x4000f180a0) Create stream\nI0817 13:02:50.609397 2875 log.go:181] (0x4000232370) (0x4000f180a0) Stream added, broadcasting: 3\nI0817 13:02:50.610735 2875 log.go:181] (0x4000232370) Reply frame received for 3\nI0817 13:02:50.610985 2875 log.go:181] (0x4000232370) (0x4000a8a820) Create stream\nI0817 13:02:50.611051 2875 log.go:181] (0x4000232370) (0x4000a8a820) Stream added, broadcasting: 5\nI0817 13:02:50.612174 2875 log.go:181] (0x4000232370) Reply frame received for 5\nI0817 13:02:50.678108 2875 log.go:181] (0x4000232370) Data frame received for 3\nI0817 13:02:50.678599 2875 log.go:181] (0x4000f180a0) (3) Data frame handling\nI0817 13:02:50.678881 2875 log.go:181] (0x4000232370) Data frame received for 1\nI0817 13:02:50.679052 2875 log.go:181] (0x4000f18000) (1) Data frame handling\nI0817 13:02:50.679942 2875 log.go:181] (0x4000232370) Data frame received for 5\nI0817 13:02:50.680145 2875 log.go:181] (0x4000a8a820) (5) Data frame handling\nI0817 13:02:50.682417 2875 log.go:181] (0x4000a8a820) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.11 30990\nConnection to 172.18.0.11 30990 port [tcp/30990] succeeded!\nI0817 13:02:50.682935 2875 log.go:181] (0x4000f18000) (1) Data frame sent\nI0817 13:02:50.683883 2875 log.go:181] (0x4000232370) Data frame received for 5\nI0817 13:02:50.683949 2875 log.go:181] (0x4000a8a820) (5) Data frame handling\nI0817 13:02:50.685691 2875 log.go:181] (0x4000232370) (0x4000f18000) Stream removed, broadcasting: 1\nI0817 13:02:50.686031 2875 log.go:181] (0x4000232370) Go away received\nI0817 13:02:50.689019 2875 log.go:181] (0x4000232370) (0x4000f18000) Stream removed, broadcasting: 1\nI0817 13:02:50.689217 2875 log.go:181] (0x4000232370) (0x4000f180a0) Stream removed, broadcasting: 3\nI0817 13:02:50.689371 
2875 log.go:181] (0x4000232370) (0x4000a8a820) Stream removed, broadcasting: 5\n" Aug 17 13:02:50.700: INFO: stdout: "" Aug 17 13:02:50.700: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-2158 execpod-affinitypgrsb -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.14 30990' Aug 17 13:02:52.905: INFO: stderr: "I0817 13:02:52.799972 2895 log.go:181] (0x400028b4a0) (0x4000c723c0) Create stream\nI0817 13:02:52.802228 2895 log.go:181] (0x400028b4a0) (0x4000c723c0) Stream added, broadcasting: 1\nI0817 13:02:52.810747 2895 log.go:181] (0x400028b4a0) Reply frame received for 1\nI0817 13:02:52.811725 2895 log.go:181] (0x400028b4a0) (0x4000c72460) Create stream\nI0817 13:02:52.811806 2895 log.go:181] (0x400028b4a0) (0x4000c72460) Stream added, broadcasting: 3\nI0817 13:02:52.813360 2895 log.go:181] (0x400028b4a0) Reply frame received for 3\nI0817 13:02:52.813713 2895 log.go:181] (0x400028b4a0) (0x40001e6000) Create stream\nI0817 13:02:52.813789 2895 log.go:181] (0x400028b4a0) (0x40001e6000) Stream added, broadcasting: 5\nI0817 13:02:52.815251 2895 log.go:181] (0x400028b4a0) Reply frame received for 5\nI0817 13:02:52.876116 2895 log.go:181] (0x400028b4a0) Data frame received for 5\nI0817 13:02:52.876909 2895 log.go:181] (0x400028b4a0) Data frame received for 3\nI0817 13:02:52.877042 2895 log.go:181] (0x4000c72460) (3) Data frame handling\nI0817 13:02:52.877123 2895 log.go:181] (0x40001e6000) (5) Data frame handling\nI0817 13:02:52.878267 2895 log.go:181] (0x400028b4a0) Data frame received for 1\nI0817 13:02:52.878351 2895 log.go:181] (0x4000c723c0) (1) Data frame handling\nI0817 13:02:52.878977 2895 log.go:181] (0x40001e6000) (5) Data frame sent\nI0817 13:02:52.879207 2895 log.go:181] (0x4000c723c0) (1) Data frame sent\n+ nc -zv -t -w 2 172.18.0.14 30990\nI0817 13:02:52.879569 2895 log.go:181] (0x400028b4a0) Data frame received for 5\nI0817 13:02:52.879635 2895 log.go:181] (0x40001e6000) (5) Data frame handling\nI0817 13:02:52.879717 2895 log.go:181] (0x40001e6000) (5) Data frame sent\nConnection to 172.18.0.14 30990 port [tcp/30990] succeeded!\nI0817 13:02:52.879779 2895 log.go:181] (0x400028b4a0) Data frame received for 5\nI0817 13:02:52.880111 2895 log.go:181] (0x400028b4a0) (0x4000c723c0) Stream removed, broadcasting: 1\nI0817 13:02:52.880903 2895 log.go:181] (0x40001e6000) (5) Data frame handling\nI0817 13:02:52.881598 2895 log.go:181] (0x400028b4a0) Go away received\nI0817 13:02:52.894140 2895 log.go:181] (0x400028b4a0) (0x4000c723c0) Stream removed, broadcasting: 1\nI0817 13:02:52.894452 2895 log.go:181] (0x400028b4a0) (0x4000c72460) Stream removed, broadcasting: 3\nI0817 13:02:52.894659 2895 log.go:181] (0x400028b4a0) (0x40001e6000) Stream removed, broadcasting: 5\n" Aug 17 13:02:52.905: INFO: stdout: "" Aug 17 13:02:52.905: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-2158 execpod-affinitypgrsb -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.11:30990/ ; done' Aug 17 13:02:54.650: INFO: stderr: "I0817 13:02:54.445276 2915 log.go:181] (0x4000726000) (0x4000c5e000) Create stream\nI0817 13:02:54.448885 2915 log.go:181] (0x4000726000) (0x4000c5e000) Stream added, broadcasting: 1\nI0817 13:02:54.462451 2915 log.go:181] (0x4000726000) Reply frame received for 1\nI0817 13:02:54.463646 2915 log.go:181] (0x4000726000) (0x40007a8000) Create stream\nI0817 13:02:54.463761 2915 log.go:181] 
(0x4000726000) (0x40007a8000) Stream added, broadcasting: 3\nI0817 13:02:54.465525 2915 log.go:181] (0x4000726000) Reply frame received for 3\nI0817 13:02:54.465929 2915 log.go:181] (0x4000726000) (0x40008946e0) Create stream\nI0817 13:02:54.466022 2915 log.go:181] (0x4000726000) (0x40008946e0) Stream added, broadcasting: 5\nI0817 13:02:54.467299 2915 log.go:181] (0x4000726000) Reply frame received for 5\nI0817 13:02:54.539761 2915 log.go:181] (0x4000726000) Data frame received for 3\nI0817 13:02:54.540139 2915 log.go:181] (0x4000726000) Data frame received for 5\nI0817 13:02:54.540417 2915 log.go:181] (0x40008946e0) (5) Data frame handling\nI0817 13:02:54.540612 2915 log.go:181] (0x40007a8000) (3) Data frame handling\nI0817 13:02:54.541303 2915 log.go:181] (0x40007a8000) (3) Data frame sent\nI0817 13:02:54.541531 2915 log.go:181] (0x40008946e0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30990/\nI0817 13:02:54.542707 2915 log.go:181] (0x4000726000) Data frame received for 3\nI0817 13:02:54.542781 2915 log.go:181] (0x40007a8000) (3) Data frame handling\nI0817 13:02:54.542843 2915 log.go:181] (0x40007a8000) (3) Data frame sent\nI0817 13:02:54.542911 2915 log.go:181] (0x4000726000) Data frame received for 3\nI0817 13:02:54.542992 2915 log.go:181] (0x40007a8000) (3) Data frame handling\nI0817 13:02:54.543063 2915 log.go:181] (0x40007a8000) (3) Data frame sent\nI0817 13:02:54.543135 2915 log.go:181] (0x4000726000) Data frame received for 5\nI0817 13:02:54.543200 2915 log.go:181] (0x40008946e0) (5) Data frame handling\nI0817 13:02:54.543276 2915 log.go:181] (0x40008946e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30990/\nI0817 13:02:54.550728 2915 log.go:181] (0x4000726000) Data frame received for 3\nI0817 13:02:54.550845 2915 log.go:181] (0x40007a8000) (3) Data frame handling\nI0817 13:02:54.550967 2915 log.go:181] (0x40007a8000) (3) Data frame sent\nI0817 13:02:54.551131 2915 log.go:181] (0x4000726000) Data frame received for 5\nI0817 13:02:54.551278 2915 log.go:181] (0x40008946e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30990/\nI0817 13:02:54.551445 2915 log.go:181] (0x4000726000) Data frame received for 3\nI0817 13:02:54.551610 2915 log.go:181] (0x40007a8000) (3) Data frame handling\nI0817 13:02:54.551845 2915 log.go:181] (0x40008946e0) (5) Data frame sent\nI0817 13:02:54.551980 2915 log.go:181] (0x40007a8000) (3) Data frame sent\nI0817 13:02:54.555137 2915 log.go:181] (0x4000726000) Data frame received for 3\nI0817 13:02:54.555256 2915 log.go:181] (0x40007a8000) (3) Data frame handling\nI0817 13:02:54.555359 2915 log.go:181] (0x40007a8000) (3) Data frame sent\nI0817 13:02:54.555576 2915 log.go:181] (0x4000726000) Data frame received for 3\nI0817 13:02:54.555709 2915 log.go:181] (0x40007a8000) (3) Data frame handling\nI0817 13:02:54.555824 2915 log.go:181] (0x40007a8000) (3) Data frame sent\nI0817 13:02:54.555916 2915 log.go:181] (0x4000726000) Data frame received for 5\nI0817 13:02:54.556021 2915 log.go:181] (0x40008946e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30990/\nI0817 13:02:54.556193 2915 log.go:181] (0x40008946e0) (5) Data frame sent\nI0817 13:02:54.561525 2915 log.go:181] (0x4000726000) Data frame received for 3\nI0817 13:02:54.561661 2915 log.go:181] (0x40007a8000) (3) Data frame handling\nI0817 13:02:54.561779 2915 log.go:181] (0x40007a8000) (3) Data frame sent\nI0817 13:02:54.562568 2915 log.go:181] 
(0x4000726000) Data frame received for 5\nI0817 13:02:54.562726 2915 log.go:181] (0x40008946e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30990/\nI0817 13:02:54.562852 2915 log.go:181] (0x4000726000) Data frame received for 3\nI0817 13:02:54.563014 2915 log.go:181] (0x40007a8000) (3) Data frame handling\nI0817 13:02:54.563183 2915 log.go:181] (0x40008946e0) (5) Data frame sent\nI0817 13:02:54.563293 2915 log.go:181] (0x40007a8000) (3) Data frame sent\nI0817 13:02:54.568594 2915 log.go:181] (0x4000726000) Data frame received for 3\nI0817 13:02:54.568706 2915 log.go:181] (0x40007a8000) (3) Data frame handling\nI0817 13:02:54.568915 2915 log.go:181] (0x40007a8000) (3) Data frame sent\nI0817 13:02:54.569056 2915 log.go:181] (0x4000726000) Data frame received for 5\nI0817 13:02:54.569159 2915 log.go:181] (0x40008946e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30990/\nI0817 13:02:54.569274 2915 log.go:181] (0x4000726000) Data frame received for 3\nI0817 13:02:54.569410 2915 log.go:181] (0x40007a8000) (3) Data frame handling\nI0817 13:02:54.569542 2915 log.go:181] (0x40008946e0) (5) Data frame sent\nI0817 13:02:54.569707 2915 log.go:181] (0x40007a8000) (3) Data frame sent\nI0817 13:02:54.572963 2915 log.go:181] (0x4000726000) Data frame received for 3\nI0817 13:02:54.573063 2915 log.go:181] (0x40007a8000) (3) Data frame handling\nI0817 13:02:54.573167 2915 log.go:181] (0x40007a8000) (3) Data frame sent\nI0817 13:02:54.573616 2915 log.go:181] (0x4000726000) Data frame received for 3\nI0817 13:02:54.573711 2915 log.go:181] (0x40007a8000) (3) Data frame handling\nI0817 13:02:54.573855 2915 log.go:181] (0x40007a8000) (3) Data frame sent\nI0817 13:02:54.573996 2915 log.go:181] (0x4000726000) Data frame received for 5\nI0817 13:02:54.574061 2915 log.go:181] (0x40008946e0) (5) Data frame handling\nI0817 13:02:54.574137 2915 log.go:181] (0x40008946e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30990/\nI0817 13:02:54.578923 2915 log.go:181] (0x4000726000) Data frame received for 5\nI0817 13:02:54.579014 2915 log.go:181] (0x40008946e0) (5) Data frame handling\nI0817 13:02:54.579085 2915 log.go:181] (0x40008946e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30990/\nI0817 13:02:54.579146 2915 log.go:181] (0x4000726000) Data frame received for 3\nI0817 13:02:54.579283 2915 log.go:181] (0x40007a8000) (3) Data frame handling\nI0817 13:02:54.579362 2915 log.go:181] (0x40007a8000) (3) Data frame sent\nI0817 13:02:54.584466 2915 log.go:181] (0x4000726000) Data frame received for 3\nI0817 13:02:54.584582 2915 log.go:181] (0x40007a8000) (3) Data frame handling\nI0817 13:02:54.584842 2915 log.go:181] (0x40007a8000) (3) Data frame sent\nI0817 13:02:54.585377 2915 log.go:181] (0x4000726000) Data frame received for 5\nI0817 13:02:54.585460 2915 log.go:181] (0x40008946e0) (5) Data frame handling\nI0817 13:02:54.585567 2915 log.go:181] (0x40008946e0) (5) Data frame sent\nI0817 13:02:54.585807 2915 log.go:181] (0x4000726000) Data frame received for 3\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30990/\nI0817 13:02:54.585901 2915 log.go:181] (0x40007a8000) (3) Data frame handling\nI0817 13:02:54.585974 2915 log.go:181] (0x40007a8000) (3) Data frame sent\nI0817 13:02:54.590558 2915 log.go:181] (0x4000726000) Data frame received for 3\nI0817 13:02:54.590676 2915 log.go:181] (0x40007a8000) (3) Data frame handling\nI0817 13:02:54.590737 2915 log.go:181] 
(0x4000726000) Data frame received for 5\nI0817 13:02:54.590807 2915 log.go:181] (0x40008946e0) (5) Data frame handling\nI0817 13:02:54.590873 2915 log.go:181] (0x40008946e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30990/\nI0817 13:02:54.590958 2915 log.go:181] (0x40007a8000) (3) Data frame sent\nI0817 13:02:54.591036 2915 log.go:181] (0x4000726000) Data frame received for 3\nI0817 13:02:54.591094 2915 log.go:181] (0x40007a8000) (3) Data frame handling\nI0817 13:02:54.591158 2915 log.go:181] (0x40007a8000) (3) Data frame sent\nI0817 13:02:54.594297 2915 log.go:181] (0x4000726000) Data frame received for 3\nI0817 13:02:54.594372 2915 log.go:181] (0x40007a8000) (3) Data frame handling\nI0817 13:02:54.594492 2915 log.go:181] (0x40007a8000) (3) Data frame sent\nI0817 13:02:54.595278 2915 log.go:181] (0x4000726000) Data frame received for 5\nI0817 13:02:54.595368 2915 log.go:181] (0x40008946e0) (5) Data frame handling\nI0817 13:02:54.595432 2915 log.go:181] (0x40008946e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30990/I0817 13:02:54.595493 2915 log.go:181] (0x4000726000) Data frame received for 5\nI0817 13:02:54.595541 2915 log.go:181] (0x40008946e0) (5) Data frame handling\n\nI0817 13:02:54.595637 2915 log.go:181] (0x4000726000) Data frame received for 3\nI0817 13:02:54.595770 2915 log.go:181] (0x40007a8000) (3) Data frame handling\nI0817 13:02:54.595952 2915 log.go:181] (0x40008946e0) (5) Data frame sent\nI0817 13:02:54.596069 2915 log.go:181] (0x40007a8000) (3) Data frame sent\nI0817 13:02:54.599758 2915 log.go:181] (0x4000726000) Data frame received for 3\nI0817 13:02:54.599874 2915 log.go:181] (0x40007a8000) (3) Data frame handling\nI0817 13:02:54.600005 2915 log.go:181] (0x40007a8000) (3) Data frame sent\nI0817 13:02:54.601341 2915 log.go:181] (0x4000726000) Data frame received for 5\nI0817 13:02:54.601426 2915 log.go:181] (0x40008946e0) (5) Data frame handling\nI0817 13:02:54.601493 2915 log.go:181] (0x40008946e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30990/\nI0817 13:02:54.601555 2915 log.go:181] (0x4000726000) Data frame received for 3\nI0817 13:02:54.601657 2915 log.go:181] (0x40007a8000) (3) Data frame handling\nI0817 13:02:54.601745 2915 log.go:181] (0x40007a8000) (3) Data frame sent\nI0817 13:02:54.605734 2915 log.go:181] (0x4000726000) Data frame received for 3\nI0817 13:02:54.605826 2915 log.go:181] (0x40007a8000) (3) Data frame handling\nI0817 13:02:54.605953 2915 log.go:181] (0x40007a8000) (3) Data frame sent\nI0817 13:02:54.606215 2915 log.go:181] (0x4000726000) Data frame received for 5\nI0817 13:02:54.606286 2915 log.go:181] (0x40008946e0) (5) Data frame handling\nI0817 13:02:54.606353 2915 log.go:181] (0x40008946e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30990/\nI0817 13:02:54.606415 2915 log.go:181] (0x4000726000) Data frame received for 3\nI0817 13:02:54.606473 2915 log.go:181] (0x40007a8000) (3) Data frame handling\nI0817 13:02:54.606548 2915 log.go:181] (0x40007a8000) (3) Data frame sent\nI0817 13:02:54.610789 2915 log.go:181] (0x4000726000) Data frame received for 3\nI0817 13:02:54.610856 2915 log.go:181] (0x40007a8000) (3) Data frame handling\nI0817 13:02:54.610938 2915 log.go:181] (0x40007a8000) (3) Data frame sent\nI0817 13:02:54.611483 2915 log.go:181] (0x4000726000) Data frame received for 5\nI0817 13:02:54.611561 2915 log.go:181] (0x40008946e0) (5) Data frame handling\nI0817 13:02:54.611628 2915 
log.go:181] (0x40008946e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30990/\nI0817 13:02:54.611685 2915 log.go:181] (0x4000726000) Data frame received for 3\nI0817 13:02:54.611739 2915 log.go:181] (0x40007a8000) (3) Data frame handling\nI0817 13:02:54.611807 2915 log.go:181] (0x40007a8000) (3) Data frame sent\nI0817 13:02:54.615780 2915 log.go:181] (0x4000726000) Data frame received for 3\nI0817 13:02:54.615838 2915 log.go:181] (0x40007a8000) (3) Data frame handling\nI0817 13:02:54.615912 2915 log.go:181] (0x40007a8000) (3) Data frame sent\nI0817 13:02:54.616470 2915 log.go:181] (0x4000726000) Data frame received for 5\nI0817 13:02:54.616572 2915 log.go:181] (0x40008946e0) (5) Data frame handling\nI0817 13:02:54.616632 2915 log.go:181] (0x40008946e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30990/\nI0817 13:02:54.616683 2915 log.go:181] (0x4000726000) Data frame received for 3\nI0817 13:02:54.616791 2915 log.go:181] (0x40007a8000) (3) Data frame handling\nI0817 13:02:54.616860 2915 log.go:181] (0x40007a8000) (3) Data frame sent\nI0817 13:02:54.623023 2915 log.go:181] (0x4000726000) Data frame received for 3\nI0817 13:02:54.623107 2915 log.go:181] (0x40007a8000) (3) Data frame handling\nI0817 13:02:54.623170 2915 log.go:181] (0x40007a8000) (3) Data frame sent\nI0817 13:02:54.623245 2915 log.go:181] (0x4000726000) Data frame received for 5\nI0817 13:02:54.623375 2915 log.go:181] (0x40008946e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30990/\nI0817 13:02:54.623480 2915 log.go:181] (0x4000726000) Data frame received for 3\nI0817 13:02:54.623594 2915 log.go:181] (0x40007a8000) (3) Data frame handling\nI0817 13:02:54.623679 2915 log.go:181] (0x40008946e0) (5) Data frame sent\nI0817 13:02:54.623773 2915 log.go:181] (0x40007a8000) (3) Data frame sent\nI0817 13:02:54.627828 2915 log.go:181] (0x4000726000) Data frame received for 3\nI0817 13:02:54.627953 2915 log.go:181] (0x40007a8000) (3) Data frame handling\nI0817 13:02:54.628088 2915 log.go:181] (0x40007a8000) (3) Data frame sent\nI0817 13:02:54.629056 2915 log.go:181] (0x4000726000) Data frame received for 3\nI0817 13:02:54.629204 2915 log.go:181] (0x40007a8000) (3) Data frame handling\nI0817 13:02:54.629416 2915 log.go:181] (0x4000726000) Data frame received for 5\nI0817 13:02:54.629580 2915 log.go:181] (0x40008946e0) (5) Data frame handling\nI0817 13:02:54.631001 2915 log.go:181] (0x4000726000) Data frame received for 1\nI0817 13:02:54.631135 2915 log.go:181] (0x4000c5e000) (1) Data frame handling\nI0817 13:02:54.631241 2915 log.go:181] (0x4000c5e000) (1) Data frame sent\nI0817 13:02:54.632002 2915 log.go:181] (0x4000726000) (0x4000c5e000) Stream removed, broadcasting: 1\nI0817 13:02:54.636305 2915 log.go:181] (0x4000726000) Go away received\nI0817 13:02:54.636579 2915 log.go:181] (0x4000726000) (0x4000c5e000) Stream removed, broadcasting: 1\nI0817 13:02:54.637803 2915 log.go:181] (0x4000726000) (0x40007a8000) Stream removed, broadcasting: 3\nI0817 13:02:54.641122 2915 log.go:181] Streams opened: 1, map[spdy.StreamId]*spdystream.Stream{0x5:(*spdystream.Stream)(0x40008946e0)}\nI0817 13:02:54.641598 2915 log.go:181] (0x4000726000) (0x40008946e0) Stream removed, broadcasting: 5\n" Aug 17 13:02:54.655: INFO: stdout: 
"\naffinity-nodeport-xhpv2\naffinity-nodeport-xhpv2\naffinity-nodeport-xhpv2\naffinity-nodeport-xhpv2\naffinity-nodeport-xhpv2\naffinity-nodeport-xhpv2\naffinity-nodeport-xhpv2\naffinity-nodeport-xhpv2\naffinity-nodeport-xhpv2\naffinity-nodeport-xhpv2\naffinity-nodeport-xhpv2\naffinity-nodeport-xhpv2\naffinity-nodeport-xhpv2\naffinity-nodeport-xhpv2\naffinity-nodeport-xhpv2\naffinity-nodeport-xhpv2" Aug 17 13:02:54.655: INFO: Received response from host: affinity-nodeport-xhpv2 Aug 17 13:02:54.655: INFO: Received response from host: affinity-nodeport-xhpv2 Aug 17 13:02:54.655: INFO: Received response from host: affinity-nodeport-xhpv2 Aug 17 13:02:54.655: INFO: Received response from host: affinity-nodeport-xhpv2 Aug 17 13:02:54.655: INFO: Received response from host: affinity-nodeport-xhpv2 Aug 17 13:02:54.655: INFO: Received response from host: affinity-nodeport-xhpv2 Aug 17 13:02:54.655: INFO: Received response from host: affinity-nodeport-xhpv2 Aug 17 13:02:54.655: INFO: Received response from host: affinity-nodeport-xhpv2 Aug 17 13:02:54.655: INFO: Received response from host: affinity-nodeport-xhpv2 Aug 17 13:02:54.655: INFO: Received response from host: affinity-nodeport-xhpv2 Aug 17 13:02:54.655: INFO: Received response from host: affinity-nodeport-xhpv2 Aug 17 13:02:54.655: INFO: Received response from host: affinity-nodeport-xhpv2 Aug 17 13:02:54.655: INFO: Received response from host: affinity-nodeport-xhpv2 Aug 17 13:02:54.655: INFO: Received response from host: affinity-nodeport-xhpv2 Aug 17 13:02:54.656: INFO: Received response from host: affinity-nodeport-xhpv2 Aug 17 13:02:54.656: INFO: Received response from host: affinity-nodeport-xhpv2 Aug 17 13:02:54.656: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport in namespace services-2158, will wait for the garbage collector to delete the pods Aug 17 13:02:54.798: INFO: Deleting ReplicationController affinity-nodeport took: 7.617543ms Aug 17 13:02:55.498: INFO: Terminating ReplicationController affinity-nodeport pods took: 700.691626ms [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 13:03:11.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2158" for this suite. 
[AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:54.942 seconds] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":303,"completed":251,"skipped":4154,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 13:03:11.735: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 17 13:03:16.435: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 17 13:03:18.591: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733266196, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733266196, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733266196, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733266195, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 17 13:03:21.032: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733266196, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733266196, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733266196, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733266195, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 17 13:03:22.746: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733266196, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733266196, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733266196, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733266195, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 17 13:03:25.901: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 17 13:03:25.908: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4572-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 13:03:27.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9149" for this suite. STEP: Destroying namespace "webhook-9149-markers" for this suite. 
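In the step above, the suite registers a MutatingWebhookConfiguration matching the test CRD via the AdmissionRegistration API, then creates a custom resource and expects the webhook's patch to be applied while fields not declared in the CRD schema are pruned. A hedged sketch of inspecting such a registration by hand; the configuration and resource names below are placeholders, not values from this run:

    # List registered mutating webhooks and inspect the rule matching the CRD
    kubectl get mutatingwebhookconfigurations
    kubectl get mutatingwebhookconfiguration <name> -o yaml   # <name> is a placeholder

    # Create an instance of the custom resource, then read it back: the field
    # injected by the webhook should be present, unknown fields pruned away
    kubectl apply -f cr-instance.yaml                         # placeholder manifest
    kubectl get <crd-plural> <cr-name> -o yaml                # placeholders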
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:16.317 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":303,"completed":252,"skipped":4181,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 13:03:28.058: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a volume subpath [sig-storage] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in volume subpath Aug 17 13:03:29.319: INFO: Waiting up to 5m0s for pod "var-expansion-29cd457a-b557-45bb-90aa-63a278b8dc0b" in namespace "var-expansion-6311" to be "Succeeded or Failed" Aug 17 13:03:29.499: INFO: Pod "var-expansion-29cd457a-b557-45bb-90aa-63a278b8dc0b": Phase="Pending", Reason="", readiness=false. Elapsed: 179.706961ms Aug 17 13:03:31.592: INFO: Pod "var-expansion-29cd457a-b557-45bb-90aa-63a278b8dc0b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.272608279s Aug 17 13:03:33.698: INFO: Pod "var-expansion-29cd457a-b557-45bb-90aa-63a278b8dc0b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.378093905s Aug 17 13:03:35.709: INFO: Pod "var-expansion-29cd457a-b557-45bb-90aa-63a278b8dc0b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.389831205s STEP: Saw pod success Aug 17 13:03:35.710: INFO: Pod "var-expansion-29cd457a-b557-45bb-90aa-63a278b8dc0b" satisfied condition "Succeeded or Failed" Aug 17 13:03:35.715: INFO: Trying to get logs from node latest-worker pod var-expansion-29cd457a-b557-45bb-90aa-63a278b8dc0b container dapi-container: STEP: delete the pod Aug 17 13:03:35.790: INFO: Waiting for pod var-expansion-29cd457a-b557-45bb-90aa-63a278b8dc0b to disappear Aug 17 13:03:35.852: INFO: Pod var-expansion-29cd457a-b557-45bb-90aa-63a278b8dc0b no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 13:03:35.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6311" for this suite. • [SLOW TEST:7.827 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow substituting values in a volume subpath [sig-storage] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance]","total":303,"completed":253,"skipped":4270,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 13:03:35.889: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2009.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-2009.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2009.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-2009.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 17 13:03:44.385: INFO: DNS probes using dns-test-7aeb0a9c-f883-42a6-99d5-5af47b8faaae succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short 
dns-test-service-3.dns-2009.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-2009.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2009.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-2009.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 17 13:03:56.762: INFO: File wheezy_udp@dns-test-service-3.dns-2009.svc.cluster.local from pod dns-2009/dns-test-242581a5-3c11-41d4-960d-fc5492716a8d contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 17 13:03:56.767: INFO: File jessie_udp@dns-test-service-3.dns-2009.svc.cluster.local from pod dns-2009/dns-test-242581a5-3c11-41d4-960d-fc5492716a8d contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 17 13:03:56.768: INFO: Lookups using dns-2009/dns-test-242581a5-3c11-41d4-960d-fc5492716a8d failed for: [wheezy_udp@dns-test-service-3.dns-2009.svc.cluster.local jessie_udp@dns-test-service-3.dns-2009.svc.cluster.local] Aug 17 13:04:01.810: INFO: File wheezy_udp@dns-test-service-3.dns-2009.svc.cluster.local from pod dns-2009/dns-test-242581a5-3c11-41d4-960d-fc5492716a8d contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 17 13:04:01.814: INFO: File jessie_udp@dns-test-service-3.dns-2009.svc.cluster.local from pod dns-2009/dns-test-242581a5-3c11-41d4-960d-fc5492716a8d contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 17 13:04:01.814: INFO: Lookups using dns-2009/dns-test-242581a5-3c11-41d4-960d-fc5492716a8d failed for: [wheezy_udp@dns-test-service-3.dns-2009.svc.cluster.local jessie_udp@dns-test-service-3.dns-2009.svc.cluster.local] Aug 17 13:04:06.781: INFO: File wheezy_udp@dns-test-service-3.dns-2009.svc.cluster.local from pod dns-2009/dns-test-242581a5-3c11-41d4-960d-fc5492716a8d contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 17 13:04:06.797: INFO: File jessie_udp@dns-test-service-3.dns-2009.svc.cluster.local from pod dns-2009/dns-test-242581a5-3c11-41d4-960d-fc5492716a8d contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 17 13:04:06.798: INFO: Lookups using dns-2009/dns-test-242581a5-3c11-41d4-960d-fc5492716a8d failed for: [wheezy_udp@dns-test-service-3.dns-2009.svc.cluster.local jessie_udp@dns-test-service-3.dns-2009.svc.cluster.local] Aug 17 13:04:11.775: INFO: File wheezy_udp@dns-test-service-3.dns-2009.svc.cluster.local from pod dns-2009/dns-test-242581a5-3c11-41d4-960d-fc5492716a8d contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 17 13:04:11.780: INFO: File jessie_udp@dns-test-service-3.dns-2009.svc.cluster.local from pod dns-2009/dns-test-242581a5-3c11-41d4-960d-fc5492716a8d contains 'foo.example.com. ' instead of 'bar.example.com.' 
Aug 17 13:04:11.780: INFO: Lookups using dns-2009/dns-test-242581a5-3c11-41d4-960d-fc5492716a8d failed for: [wheezy_udp@dns-test-service-3.dns-2009.svc.cluster.local jessie_udp@dns-test-service-3.dns-2009.svc.cluster.local] Aug 17 13:04:18.117: INFO: DNS probes using dns-test-242581a5-3c11-41d4-960d-fc5492716a8d succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2009.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-2009.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2009.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-2009.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 17 13:04:27.136: INFO: DNS probes using dns-test-fb643066-3600-46f4-9f95-6d6a788efd3a succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 13:04:27.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2009" for this suite. • [SLOW TEST:51.385 seconds] [sig-network] DNS /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":303,"completed":254,"skipped":4286,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 13:04:27.274: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet 
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 13:04:33.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1217" for this suite. • [SLOW TEST:6.517 seconds] [k8s.io] Kubelet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when scheduling a read only busybox container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:188 should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":255,"skipped":4303,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 13:04:33.793: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Aug 17 13:04:38.056: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 13:04:38.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8941" for this suite. 
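The termination-message assertion above can be reproduced outside the suite: run a container that writes its message to the termination message file and exits, then read the message back from the container status. A minimal sketch under those assumptions, using the default terminationMessagePath rather than the custom file the test configures; the pod name is a placeholder:

    # Container writes "OK" to /dev/termination-log (the default path) and exits 0
    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: termination-demo            # placeholder name
    spec:
      restartPolicy: Never
      containers:
      - name: main
        image: busybox
        command: ["/bin/sh", "-c", "echo -n OK > /dev/termination-log"]
        terminationMessagePolicy: FallbackToLogsOnError
    EOF

    # After the pod terminates, the kubelet surfaces the file's content here
    kubectl get pod termination-demo \
      -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'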
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":303,"completed":256,"skipped":4303,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSSSSS ------------------------------ [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] PodTemplates /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 13:04:38.247: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should run the lifecycle of PodTemplates [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-node] PodTemplates /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 13:04:38.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-3230" for this suite. •{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":303,"completed":257,"skipped":4318,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] IngressClass API should support creating IngressClass API operations [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] IngressClass API /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 13:04:38.449: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ingressclass STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] IngressClass API /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingressclass.go:148 [It] should support creating IngressClass API operations [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting /apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.iov1 STEP: creating STEP: getting STEP: listing STEP: watching Aug 17 13:04:38.561: INFO: starting watch STEP: patching STEP: updating Aug 17 
13:04:38.578: INFO: waiting for watch events with expected annotations Aug 17 13:04:38.580: INFO: saw patched and updated annotations STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] IngressClass API /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 13:04:38.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ingressclass-6178" for this suite. •{"msg":"PASSED [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]","total":303,"completed":258,"skipped":4336,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 13:04:38.651: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6591.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-6591.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 17 13:04:48.927: INFO: DNS probes using dns-6591/dns-test-81c83a0f-02b8-419b-8679-c87efb2fa0e9 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 13:04:49.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6591" for this suite. • [SLOW TEST:11.143 seconds] [sig-network] DNS /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":303,"completed":259,"skipped":4336,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 13:04:49.797: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-5563 Aug 17 13:04:55.989: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-5563 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Aug 17 13:04:57.961: INFO: stderr: "I0817 13:04:57.850462 2935 log.go:181] (0x40001631e0) (0x4000a6c280) Create stream\nI0817 13:04:57.853449 2935 log.go:181] (0x40001631e0) (0x4000a6c280) Stream added, broadcasting: 1\nI0817 13:04:57.862745 2935 log.go:181] (0x40001631e0) Reply frame received for 1\nI0817 13:04:57.863271 2935 
log.go:181] (0x40001631e0) (0x4000b1a000) Create stream\nI0817 13:04:57.863327 2935 log.go:181] (0x40001631e0) (0x4000b1a000) Stream added, broadcasting: 3\nI0817 13:04:57.865123 2935 log.go:181] (0x40001631e0) Reply frame received for 3\nI0817 13:04:57.865611 2935 log.go:181] (0x40001631e0) (0x4000a6c320) Create stream\nI0817 13:04:57.865701 2935 log.go:181] (0x40001631e0) (0x4000a6c320) Stream added, broadcasting: 5\nI0817 13:04:57.867115 2935 log.go:181] (0x40001631e0) Reply frame received for 5\nI0817 13:04:57.936925 2935 log.go:181] (0x40001631e0) Data frame received for 5\nI0817 13:04:57.937187 2935 log.go:181] (0x4000a6c320) (5) Data frame handling\nI0817 13:04:57.937771 2935 log.go:181] (0x4000a6c320) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI0817 13:04:57.939831 2935 log.go:181] (0x40001631e0) Data frame received for 3\nI0817 13:04:57.939945 2935 log.go:181] (0x4000b1a000) (3) Data frame handling\nI0817 13:04:57.940072 2935 log.go:181] (0x4000b1a000) (3) Data frame sent\nI0817 13:04:57.940546 2935 log.go:181] (0x40001631e0) Data frame received for 5\nI0817 13:04:57.940621 2935 log.go:181] (0x4000a6c320) (5) Data frame handling\nI0817 13:04:57.941187 2935 log.go:181] (0x40001631e0) Data frame received for 3\nI0817 13:04:57.941337 2935 log.go:181] (0x4000b1a000) (3) Data frame handling\nI0817 13:04:57.942684 2935 log.go:181] (0x40001631e0) Data frame received for 1\nI0817 13:04:57.942747 2935 log.go:181] (0x4000a6c280) (1) Data frame handling\nI0817 13:04:57.942813 2935 log.go:181] (0x4000a6c280) (1) Data frame sent\nI0817 13:04:57.944412 2935 log.go:181] (0x40001631e0) (0x4000a6c280) Stream removed, broadcasting: 1\nI0817 13:04:57.945636 2935 log.go:181] (0x40001631e0) Go away received\nI0817 13:04:57.950212 2935 log.go:181] (0x40001631e0) (0x4000a6c280) Stream removed, broadcasting: 1\nI0817 13:04:57.950554 2935 log.go:181] (0x40001631e0) (0x4000b1a000) Stream removed, broadcasting: 3\nI0817 13:04:57.950778 2935 log.go:181] (0x40001631e0) (0x4000a6c320) Stream removed, broadcasting: 5\n" Aug 17 13:04:57.962: INFO: stdout: "iptables" Aug 17 13:04:57.962: INFO: proxyMode: iptables Aug 17 13:04:57.999: INFO: Waiting for pod kube-proxy-mode-detector to disappear Aug 17 13:04:58.082: INFO: Pod kube-proxy-mode-detector still exists Aug 17 13:05:00.083: INFO: Waiting for pod kube-proxy-mode-detector to disappear Aug 17 13:05:00.089: INFO: Pod kube-proxy-mode-detector still exists Aug 17 13:05:02.083: INFO: Waiting for pod kube-proxy-mode-detector to disappear Aug 17 13:05:02.089: INFO: Pod kube-proxy-mode-detector still exists Aug 17 13:05:04.083: INFO: Waiting for pod kube-proxy-mode-detector to disappear Aug 17 13:05:04.091: INFO: Pod kube-proxy-mode-detector still exists Aug 17 13:05:06.083: INFO: Waiting for pod kube-proxy-mode-detector to disappear Aug 17 13:05:06.091: INFO: Pod kube-proxy-mode-detector still exists Aug 17 13:05:08.083: INFO: Waiting for pod kube-proxy-mode-detector to disappear Aug 17 13:05:08.090: INFO: Pod kube-proxy-mode-detector still exists Aug 17 13:05:10.083: INFO: Waiting for pod kube-proxy-mode-detector to disappear Aug 17 13:05:10.267: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-nodeport-timeout in namespace services-5563 STEP: creating replication controller affinity-nodeport-timeout in namespace services-5563 I0817 13:05:10.451261 10 runners.go:190] Created replication controller with name: affinity-nodeport-timeout, namespace: services-5563, replica count: 3 I0817 
13:05:13.502539 10 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0817 13:05:16.503382 10 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 17 13:05:16.524: INFO: Creating new exec pod Aug 17 13:05:23.574: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-5563 execpod-affinityzjzxt -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-timeout 80' Aug 17 13:05:25.214: INFO: stderr: "I0817 13:05:25.076647 2956 log.go:181] (0x400003a0b0) (0x4000838000) Create stream\nI0817 13:05:25.080494 2956 log.go:181] (0x400003a0b0) (0x4000838000) Stream added, broadcasting: 1\nI0817 13:05:25.093763 2956 log.go:181] (0x400003a0b0) Reply frame received for 1\nI0817 13:05:25.094500 2956 log.go:181] (0x400003a0b0) (0x4000d90000) Create stream\nI0817 13:05:25.094573 2956 log.go:181] (0x400003a0b0) (0x4000d90000) Stream added, broadcasting: 3\nI0817 13:05:25.096025 2956 log.go:181] (0x400003a0b0) Reply frame received for 3\nI0817 13:05:25.096286 2956 log.go:181] (0x400003a0b0) (0x40008380a0) Create stream\nI0817 13:05:25.096344 2956 log.go:181] (0x400003a0b0) (0x40008380a0) Stream added, broadcasting: 5\nI0817 13:05:25.097731 2956 log.go:181] (0x400003a0b0) Reply frame received for 5\nI0817 13:05:25.194224 2956 log.go:181] (0x400003a0b0) Data frame received for 5\nI0817 13:05:25.194752 2956 log.go:181] (0x400003a0b0) Data frame received for 3\nI0817 13:05:25.194942 2956 log.go:181] (0x4000d90000) (3) Data frame handling\nI0817 13:05:25.195244 2956 log.go:181] (0x40008380a0) (5) Data frame handling\nI0817 13:05:25.197065 2956 log.go:181] (0x40008380a0) (5) Data frame sent\nI0817 13:05:25.197366 2956 log.go:181] (0x400003a0b0) Data frame received for 5\nI0817 13:05:25.197503 2956 log.go:181] (0x40008380a0) (5) Data frame handling\n+ nc -zv -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\nI0817 13:05:25.199596 2956 log.go:181] (0x400003a0b0) Data frame received for 1\nI0817 13:05:25.199698 2956 log.go:181] (0x4000838000) (1) Data frame handling\nI0817 13:05:25.199827 2956 log.go:181] (0x4000838000) (1) Data frame sent\nI0817 13:05:25.201020 2956 log.go:181] (0x400003a0b0) (0x4000838000) Stream removed, broadcasting: 1\nI0817 13:05:25.202130 2956 log.go:181] (0x400003a0b0) Go away received\nI0817 13:05:25.205940 2956 log.go:181] (0x400003a0b0) (0x4000838000) Stream removed, broadcasting: 1\nI0817 13:05:25.206266 2956 log.go:181] (0x400003a0b0) (0x4000d90000) Stream removed, broadcasting: 3\nI0817 13:05:25.206485 2956 log.go:181] (0x400003a0b0) (0x40008380a0) Stream removed, broadcasting: 5\n" Aug 17 13:05:25.215: INFO: stdout: "" Aug 17 13:05:25.221: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-5563 execpod-affinityzjzxt -- /bin/sh -x -c nc -zv -t -w 2 10.100.239.199 80' Aug 17 13:05:26.807: INFO: stderr: "I0817 13:05:26.700172 2976 log.go:181] (0x4000546c60) (0x40006d2320) Create stream\nI0817 13:05:26.705968 2976 log.go:181] (0x4000546c60) (0x40006d2320) Stream added, broadcasting: 1\nI0817 13:05:26.721931 2976 log.go:181] (0x4000546c60) Reply frame received for 1\nI0817 13:05:26.727860 2976 log.go:181] (0x4000546c60) (0x400099c780) Create stream\nI0817 
13:05:26.727958 2976 log.go:181] (0x4000546c60) (0x400099c780) Stream added, broadcasting: 3\nI0817 13:05:26.729374 2976 log.go:181] (0x4000546c60) Reply frame received for 3\nI0817 13:05:26.729677 2976 log.go:181] (0x4000546c60) (0x400099dc20) Create stream\nI0817 13:05:26.729737 2976 log.go:181] (0x4000546c60) (0x400099dc20) Stream added, broadcasting: 5\nI0817 13:05:26.730761 2976 log.go:181] (0x4000546c60) Reply frame received for 5\nI0817 13:05:26.787539 2976 log.go:181] (0x4000546c60) Data frame received for 5\nI0817 13:05:26.787719 2976 log.go:181] (0x4000546c60) Data frame received for 3\nI0817 13:05:26.787964 2976 log.go:181] (0x4000546c60) Data frame received for 1\nI0817 13:05:26.788452 2976 log.go:181] (0x400099c780) (3) Data frame handling\nI0817 13:05:26.789125 2976 log.go:181] (0x40006d2320) (1) Data frame handling\nI0817 13:05:26.789669 2976 log.go:181] (0x400099dc20) (5) Data frame handling\n+ nc -zv -t -w 2 10.100.239.199 80\nConnection to 10.100.239.199 80 port [tcp/http] succeeded!\nI0817 13:05:26.792057 2976 log.go:181] (0x400099dc20) (5) Data frame sent\nI0817 13:05:26.792190 2976 log.go:181] (0x40006d2320) (1) Data frame sent\nI0817 13:05:26.793077 2976 log.go:181] (0x4000546c60) Data frame received for 5\nI0817 13:05:26.793171 2976 log.go:181] (0x400099dc20) (5) Data frame handling\nI0817 13:05:26.793740 2976 log.go:181] (0x4000546c60) (0x40006d2320) Stream removed, broadcasting: 1\nI0817 13:05:26.797060 2976 log.go:181] (0x4000546c60) (0x40006d2320) Stream removed, broadcasting: 1\nI0817 13:05:26.797399 2976 log.go:181] (0x4000546c60) (0x400099c780) Stream removed, broadcasting: 3\nI0817 13:05:26.798767 2976 log.go:181] (0x4000546c60) (0x400099dc20) Stream removed, broadcasting: 5\n" Aug 17 13:05:26.808: INFO: stdout: "" Aug 17 13:05:26.809: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-5563 execpod-affinityzjzxt -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.11 31001' Aug 17 13:05:28.634: INFO: stderr: "I0817 13:05:28.477955 2997 log.go:181] (0x4000248000) (0x4000c6c000) Create stream\nI0817 13:05:28.483205 2997 log.go:181] (0x4000248000) (0x4000c6c000) Stream added, broadcasting: 1\nI0817 13:05:28.501050 2997 log.go:181] (0x4000248000) Reply frame received for 1\nI0817 13:05:28.502450 2997 log.go:181] (0x4000248000) (0x4000c6c0a0) Create stream\nI0817 13:05:28.502584 2997 log.go:181] (0x4000248000) (0x4000c6c0a0) Stream added, broadcasting: 3\nI0817 13:05:28.504811 2997 log.go:181] (0x4000248000) Reply frame received for 3\nI0817 13:05:28.505084 2997 log.go:181] (0x4000248000) (0x400091c000) Create stream\nI0817 13:05:28.505154 2997 log.go:181] (0x4000248000) (0x400091c000) Stream added, broadcasting: 5\nI0817 13:05:28.506530 2997 log.go:181] (0x4000248000) Reply frame received for 5\nI0817 13:05:28.608946 2997 log.go:181] (0x4000248000) Data frame received for 3\nI0817 13:05:28.609286 2997 log.go:181] (0x4000248000) Data frame received for 5\nI0817 13:05:28.609492 2997 log.go:181] (0x400091c000) (5) Data frame handling\nI0817 13:05:28.609701 2997 log.go:181] (0x4000c6c0a0) (3) Data frame handling\nI0817 13:05:28.610072 2997 log.go:181] (0x4000248000) Data frame received for 1\nI0817 13:05:28.610198 2997 log.go:181] (0x4000c6c000) (1) Data frame handling\nI0817 13:05:28.612872 2997 log.go:181] (0x4000c6c000) (1) Data frame sent\nI0817 13:05:28.613123 2997 log.go:181] (0x400091c000) (5) Data frame sent\nI0817 13:05:28.613246 2997 log.go:181] (0x4000248000) Data frame received for 
5\nI0817 13:05:28.613351 2997 log.go:181] (0x400091c000) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.11 31001\nConnection to 172.18.0.11 31001 port [tcp/31001] succeeded!\nI0817 13:05:28.614001 2997 log.go:181] (0x4000248000) (0x4000c6c000) Stream removed, broadcasting: 1\nI0817 13:05:28.617815 2997 log.go:181] (0x4000248000) Go away received\nI0817 13:05:28.622187 2997 log.go:181] (0x4000248000) (0x4000c6c000) Stream removed, broadcasting: 1\nI0817 13:05:28.622862 2997 log.go:181] (0x4000248000) (0x4000c6c0a0) Stream removed, broadcasting: 3\nI0817 13:05:28.623771 2997 log.go:181] (0x4000248000) (0x400091c000) Stream removed, broadcasting: 5\n" Aug 17 13:05:28.635: INFO: stdout: "" Aug 17 13:05:28.635: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-5563 execpod-affinityzjzxt -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.14 31001' Aug 17 13:05:30.290: INFO: stderr: "I0817 13:05:30.194912 3017 log.go:181] (0x40001f53f0) (0x40006e6280) Create stream\nI0817 13:05:30.202005 3017 log.go:181] (0x40001f53f0) (0x40006e6280) Stream added, broadcasting: 1\nI0817 13:05:30.218499 3017 log.go:181] (0x40001f53f0) Reply frame received for 1\nI0817 13:05:30.219279 3017 log.go:181] (0x40001f53f0) (0x400081efa0) Create stream\nI0817 13:05:30.219367 3017 log.go:181] (0x40001f53f0) (0x400081efa0) Stream added, broadcasting: 3\nI0817 13:05:30.221040 3017 log.go:181] (0x40001f53f0) Reply frame received for 3\nI0817 13:05:30.221268 3017 log.go:181] (0x40001f53f0) (0x400037c0a0) Create stream\nI0817 13:05:30.221324 3017 log.go:181] (0x40001f53f0) (0x400037c0a0) Stream added, broadcasting: 5\nI0817 13:05:30.222369 3017 log.go:181] (0x40001f53f0) Reply frame received for 5\nI0817 13:05:30.271017 3017 log.go:181] (0x40001f53f0) Data frame received for 3\nI0817 13:05:30.271489 3017 log.go:181] (0x40001f53f0) Data frame received for 1\nI0817 13:05:30.271615 3017 log.go:181] (0x400081efa0) (3) Data frame handling\nI0817 13:05:30.271733 3017 log.go:181] (0x40006e6280) (1) Data frame handling\nI0817 13:05:30.272601 3017 log.go:181] (0x40001f53f0) Data frame received for 5\nI0817 13:05:30.272696 3017 log.go:181] (0x400037c0a0) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.14 31001\nConnection to 172.18.0.14 31001 port [tcp/31001] succeeded!\nI0817 13:05:30.273557 3017 log.go:181] (0x40006e6280) (1) Data frame sent\nI0817 13:05:30.273649 3017 log.go:181] (0x400037c0a0) (5) Data frame sent\nI0817 13:05:30.274221 3017 log.go:181] (0x40001f53f0) Data frame received for 5\nI0817 13:05:30.274285 3017 log.go:181] (0x400037c0a0) (5) Data frame handling\nI0817 13:05:30.276894 3017 log.go:181] (0x40001f53f0) (0x40006e6280) Stream removed, broadcasting: 1\nI0817 13:05:30.277694 3017 log.go:181] (0x40001f53f0) Go away received\nI0817 13:05:30.280064 3017 log.go:181] (0x40001f53f0) (0x40006e6280) Stream removed, broadcasting: 1\nI0817 13:05:30.280399 3017 log.go:181] (0x40001f53f0) (0x400081efa0) Stream removed, broadcasting: 3\nI0817 13:05:30.280549 3017 log.go:181] (0x40001f53f0) (0x400037c0a0) Stream removed, broadcasting: 5\n" Aug 17 13:05:30.291: INFO: stdout: "" Aug 17 13:05:30.291: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-5563 execpod-affinityzjzxt -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.11:31001/ ; done' Aug 17 13:05:32.019: INFO: stderr: "I0817 13:05:31.826726 3037 log.go:181] (0x4000ec7130) 
(0x4000d02960) Create stream\nI0817 13:05:31.830412 3037 log.go:181] (0x4000ec7130) (0x4000d02960) Stream added, broadcasting: 1\nI0817 13:05:31.851611 3037 log.go:181] (0x4000ec7130) Reply frame received for 1\nI0817 13:05:31.852211 3037 log.go:181] (0x4000ec7130) (0x4000d02000) Create stream\nI0817 13:05:31.852302 3037 log.go:181] (0x4000ec7130) (0x4000d02000) Stream added, broadcasting: 3\nI0817 13:05:31.853527 3037 log.go:181] (0x4000ec7130) Reply frame received for 3\nI0817 13:05:31.853820 3037 log.go:181] (0x4000ec7130) (0x40001a9540) Create stream\nI0817 13:05:31.853892 3037 log.go:181] (0x4000ec7130) (0x40001a9540) Stream added, broadcasting: 5\nI0817 13:05:31.854879 3037 log.go:181] (0x4000ec7130) Reply frame received for 5\nI0817 13:05:31.914795 3037 log.go:181] (0x4000ec7130) Data frame received for 3\nI0817 13:05:31.915368 3037 log.go:181] (0x4000ec7130) Data frame received for 5\nI0817 13:05:31.915656 3037 log.go:181] (0x40001a9540) (5) Data frame handling\nI0817 13:05:31.915952 3037 log.go:181] (0x4000d02000) (3) Data frame handling\nI0817 13:05:31.916915 3037 log.go:181] (0x4000d02000) (3) Data frame sent\nI0817 13:05:31.917422 3037 log.go:181] (0x40001a9540) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31001/\nI0817 13:05:31.921483 3037 log.go:181] (0x4000ec7130) Data frame received for 3\nI0817 13:05:31.921597 3037 log.go:181] (0x4000d02000) (3) Data frame handling\nI0817 13:05:31.921699 3037 log.go:181] (0x4000d02000) (3) Data frame sent\nI0817 13:05:31.921825 3037 log.go:181] (0x4000ec7130) Data frame received for 3\nI0817 13:05:31.921916 3037 log.go:181] (0x4000d02000) (3) Data frame handling\nI0817 13:05:31.922103 3037 log.go:181] (0x4000ec7130) Data frame received for 5\nI0817 13:05:31.922311 3037 log.go:181] (0x40001a9540) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31001/\nI0817 13:05:31.922395 3037 log.go:181] (0x4000d02000) (3) Data frame sent\nI0817 13:05:31.922508 3037 log.go:181] (0x40001a9540) (5) Data frame sent\nI0817 13:05:31.926293 3037 log.go:181] (0x4000ec7130) Data frame received for 3\nI0817 13:05:31.926373 3037 log.go:181] (0x4000ec7130) Data frame received for 5\nI0817 13:05:31.926464 3037 log.go:181] (0x40001a9540) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31001/\nI0817 13:05:31.926534 3037 log.go:181] (0x4000d02000) (3) Data frame handling\nI0817 13:05:31.926625 3037 log.go:181] (0x4000d02000) (3) Data frame sent\nI0817 13:05:31.926699 3037 log.go:181] (0x4000ec7130) Data frame received for 3\nI0817 13:05:31.926759 3037 log.go:181] (0x4000d02000) (3) Data frame handling\nI0817 13:05:31.926888 3037 log.go:181] (0x40001a9540) (5) Data frame sent\nI0817 13:05:31.927044 3037 log.go:181] (0x4000d02000) (3) Data frame sent\nI0817 13:05:31.931660 3037 log.go:181] (0x4000ec7130) Data frame received for 3\nI0817 13:05:31.931758 3037 log.go:181] (0x4000d02000) (3) Data frame handling\nI0817 13:05:31.931859 3037 log.go:181] (0x4000d02000) (3) Data frame sent\nI0817 13:05:31.931959 3037 log.go:181] (0x4000ec7130) Data frame received for 3\nI0817 13:05:31.932052 3037 log.go:181] (0x4000d02000) (3) Data frame handling\nI0817 13:05:31.932168 3037 log.go:181] (0x4000ec7130) Data frame received for 5\nI0817 13:05:31.932282 3037 log.go:181] (0x40001a9540) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31001/\nI0817 13:05:31.932372 3037 log.go:181] (0x4000d02000) (3) Data frame sent\nI0817 
13:05:31.932464 3037 log.go:181] (0x40001a9540) (5) Data frame sent\nI0817 13:05:31.936053 3037 log.go:181] (0x4000ec7130) Data frame received for 3\nI0817 13:05:31.936119 3037 log.go:181] (0x4000d02000) (3) Data frame handling\nI0817 13:05:31.936206 3037 log.go:181] (0x4000d02000) (3) Data frame sent\nI0817 13:05:31.937333 3037 log.go:181] (0x4000ec7130) Data frame received for 5\nI0817 13:05:31.937440 3037 log.go:181] (0x40001a9540) (5) Data frame handling\nI0817 13:05:31.937547 3037 log.go:181] (0x40001a9540) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31001/\nI0817 13:05:31.937631 3037 log.go:181] (0x4000ec7130) Data frame received for 3\nI0817 13:05:31.937710 3037 log.go:181] (0x4000d02000) (3) Data frame handling\nI0817 13:05:31.937805 3037 log.go:181] (0x4000d02000) (3) Data frame sent\nI0817 13:05:31.941162 3037 log.go:181] (0x4000ec7130) Data frame received for 3\nI0817 13:05:31.941257 3037 log.go:181] (0x4000d02000) (3) Data frame handling\nI0817 13:05:31.941377 3037 log.go:181] (0x4000d02000) (3) Data frame sent\nI0817 13:05:31.941660 3037 log.go:181] (0x4000ec7130) Data frame received for 3\nI0817 13:05:31.941778 3037 log.go:181] (0x4000d02000) (3) Data frame handling\nI0817 13:05:31.941897 3037 log.go:181] (0x4000ec7130) Data frame received for 5\nI0817 13:05:31.942037 3037 log.go:181] (0x40001a9540) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31001/\nI0817 13:05:31.942145 3037 log.go:181] (0x4000d02000) (3) Data frame sent\nI0817 13:05:31.942240 3037 log.go:181] (0x40001a9540) (5) Data frame sent\nI0817 13:05:31.944688 3037 log.go:181] (0x4000ec7130) Data frame received for 3\nI0817 13:05:31.944936 3037 log.go:181] (0x4000d02000) (3) Data frame handling\nI0817 13:05:31.945076 3037 log.go:181] (0x4000d02000) (3) Data frame sent\nI0817 13:05:31.945202 3037 log.go:181] (0x4000ec7130) Data frame received for 3\nI0817 13:05:31.945333 3037 log.go:181] (0x4000d02000) (3) Data frame handling\nI0817 13:05:31.945465 3037 log.go:181] (0x4000ec7130) Data frame received for 5\nI0817 13:05:31.945600 3037 log.go:181] (0x40001a9540) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31001/\nI0817 13:05:31.945686 3037 log.go:181] (0x4000d02000) (3) Data frame sent\nI0817 13:05:31.945800 3037 log.go:181] (0x40001a9540) (5) Data frame sent\nI0817 13:05:31.951184 3037 log.go:181] (0x4000ec7130) Data frame received for 3\nI0817 13:05:31.951319 3037 log.go:181] (0x4000d02000) (3) Data frame handling\nI0817 13:05:31.951457 3037 log.go:181] (0x4000d02000) (3) Data frame sent\nI0817 13:05:31.951734 3037 log.go:181] (0x4000ec7130) Data frame received for 3\nI0817 13:05:31.951811 3037 log.go:181] (0x4000d02000) (3) Data frame handling\nI0817 13:05:31.951917 3037 log.go:181] (0x4000ec7130) Data frame received for 5\nI0817 13:05:31.952043 3037 log.go:181] (0x40001a9540) (5) Data frame handling\nI0817 13:05:31.952170 3037 log.go:181] (0x40001a9540) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31001/\nI0817 13:05:31.952283 3037 log.go:181] (0x4000d02000) (3) Data frame sent\nI0817 13:05:31.956930 3037 log.go:181] (0x4000ec7130) Data frame received for 3\nI0817 13:05:31.956994 3037 log.go:181] (0x4000d02000) (3) Data frame handling\nI0817 13:05:31.957063 3037 log.go:181] (0x4000d02000) (3) Data frame sent\nI0817 13:05:31.957791 3037 log.go:181] (0x4000ec7130) Data frame received for 5\nI0817 13:05:31.957869 3037 log.go:181] (0x40001a9540) (5) Data frame 
handling\nI0817 13:05:31.957932 3037 log.go:181] (0x40001a9540) (5) Data frame sent\nI0817 13:05:31.957989 3037 log.go:181] (0x4000ec7130) Data frame received for 3\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31001/\nI0817 13:05:31.958043 3037 log.go:181] (0x4000d02000) (3) Data frame handling\nI0817 13:05:31.958103 3037 log.go:181] (0x4000d02000) (3) Data frame sent\nI0817 13:05:31.962473 3037 log.go:181] (0x4000ec7130) Data frame received for 3\nI0817 13:05:31.962594 3037 log.go:181] (0x4000d02000) (3) Data frame handling\nI0817 13:05:31.962719 3037 log.go:181] (0x4000d02000) (3) Data frame sent\nI0817 13:05:31.962994 3037 log.go:181] (0x4000ec7130) Data frame received for 5\nI0817 13:05:31.963092 3037 log.go:181] (0x40001a9540) (5) Data frame handling\nI0817 13:05:31.963172 3037 log.go:181] (0x40001a9540) (5) Data frame sent\nI0817 13:05:31.963245 3037 log.go:181] (0x4000ec7130) Data frame received for 3\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31001/\nI0817 13:05:31.963328 3037 log.go:181] (0x4000d02000) (3) Data frame handling\nI0817 13:05:31.963667 3037 log.go:181] (0x4000d02000) (3) Data frame sent\nI0817 13:05:31.967765 3037 log.go:181] (0x4000ec7130) Data frame received for 3\nI0817 13:05:31.967889 3037 log.go:181] (0x4000d02000) (3) Data frame handling\nI0817 13:05:31.968028 3037 log.go:181] (0x4000d02000) (3) Data frame sent\nI0817 13:05:31.968321 3037 log.go:181] (0x4000ec7130) Data frame received for 3\nI0817 13:05:31.968445 3037 log.go:181] (0x4000d02000) (3) Data frame handling\nI0817 13:05:31.968574 3037 log.go:181] (0x4000d02000) (3) Data frame sent\nI0817 13:05:31.968664 3037 log.go:181] (0x4000ec7130) Data frame received for 5\nI0817 13:05:31.968824 3037 log.go:181] (0x40001a9540) (5) Data frame handling\nI0817 13:05:31.968920 3037 log.go:181] (0x40001a9540) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31001/\nI0817 13:05:31.973141 3037 log.go:181] (0x4000ec7130) Data frame received for 3\nI0817 13:05:31.973325 3037 log.go:181] (0x4000d02000) (3) Data frame handling\nI0817 13:05:31.973559 3037 log.go:181] (0x4000d02000) (3) Data frame sent\nI0817 13:05:31.974186 3037 log.go:181] (0x4000ec7130) Data frame received for 3\nI0817 13:05:31.974333 3037 log.go:181] (0x4000d02000) (3) Data frame handling\nI0817 13:05:31.974492 3037 log.go:181] (0x4000d02000) (3) Data frame sent\nI0817 13:05:31.974634 3037 log.go:181] (0x4000ec7130) Data frame received for 5\nI0817 13:05:31.974748 3037 log.go:181] (0x40001a9540) (5) Data frame handling\nI0817 13:05:31.974890 3037 log.go:181] (0x40001a9540) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31001/\nI0817 13:05:31.978078 3037 log.go:181] (0x4000ec7130) Data frame received for 3\nI0817 13:05:31.978194 3037 log.go:181] (0x4000d02000) (3) Data frame handling\nI0817 13:05:31.978351 3037 log.go:181] (0x4000d02000) (3) Data frame sent\nI0817 13:05:31.978607 3037 log.go:181] (0x4000ec7130) Data frame received for 5\nI0817 13:05:31.978722 3037 log.go:181] (0x4000ec7130) Data frame received for 3\nI0817 13:05:31.978833 3037 log.go:181] (0x4000d02000) (3) Data frame handling\nI0817 13:05:31.978946 3037 log.go:181] (0x40001a9540) (5) Data frame handling\nI0817 13:05:31.979082 3037 log.go:181] (0x40001a9540) (5) Data frame sent\n+ echo\n+ curl -q -sI0817 13:05:31.979260 3037 log.go:181] (0x4000ec7130) Data frame received for 5\nI0817 13:05:31.979384 3037 log.go:181] (0x40001a9540) (5) Data frame handling\n --connect-timeout 2 
http://172.18.0.11:31001/\nI0817 13:05:31.979504 3037 log.go:181] (0x4000d02000) (3) Data frame sent\nI0817 13:05:31.979646 3037 log.go:181] (0x40001a9540) (5) Data frame sent\nI0817 13:05:31.988846 3037 log.go:181] (0x4000ec7130) Data frame received for 5\nI0817 13:05:31.988983 3037 log.go:181] (0x40001a9540) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31001/\nI0817 13:05:31.989095 3037 log.go:181] (0x4000ec7130) Data frame received for 3\nI0817 13:05:31.989243 3037 log.go:181] (0x4000d02000) (3) Data frame handling\nI0817 13:05:31.989379 3037 log.go:181] (0x4000d02000) (3) Data frame sent\nI0817 13:05:31.989531 3037 log.go:181] (0x40001a9540) (5) Data frame sent\nI0817 13:05:31.994114 3037 log.go:181] (0x4000ec7130) Data frame received for 5\nI0817 13:05:31.994201 3037 log.go:181] (0x40001a9540) (5) Data frame handling\nI0817 13:05:31.994268 3037 log.go:181] (0x40001a9540) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31001/\nI0817 13:05:31.994346 3037 log.go:181] (0x4000ec7130) Data frame received for 3\nI0817 13:05:31.994405 3037 log.go:181] (0x4000d02000) (3) Data frame handling\nI0817 13:05:31.994476 3037 log.go:181] (0x4000d02000) (3) Data frame sent\nI0817 13:05:31.994540 3037 log.go:181] (0x4000ec7130) Data frame received for 3\nI0817 13:05:31.994624 3037 log.go:181] (0x4000d02000) (3) Data frame handling\nI0817 13:05:31.994710 3037 log.go:181] (0x4000d02000) (3) Data frame sent\nI0817 13:05:31.997028 3037 log.go:181] (0x4000ec7130) Data frame received for 3\nI0817 13:05:31.997161 3037 log.go:181] (0x4000d02000) (3) Data frame handling\nI0817 13:05:31.997281 3037 log.go:181] (0x4000d02000) (3) Data frame sent\nI0817 13:05:31.997548 3037 log.go:181] (0x4000ec7130) Data frame received for 3\nI0817 13:05:31.997629 3037 log.go:181] (0x4000d02000) (3) Data frame handling\nI0817 13:05:31.997688 3037 log.go:181] (0x4000d02000) (3) Data frame sent\nI0817 13:05:31.997774 3037 log.go:181] (0x4000ec7130) Data frame received for 5\nI0817 13:05:31.997903 3037 log.go:181] (0x40001a9540) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31001/\nI0817 13:05:31.998033 3037 log.go:181] (0x40001a9540) (5) Data frame sent\nI0817 13:05:32.000677 3037 log.go:181] (0x4000ec7130) Data frame received for 3\nI0817 13:05:32.000835 3037 log.go:181] (0x4000d02000) (3) Data frame handling\nI0817 13:05:32.000909 3037 log.go:181] (0x4000d02000) (3) Data frame sent\nI0817 13:05:32.001389 3037 log.go:181] (0x4000ec7130) Data frame received for 5\nI0817 13:05:32.001486 3037 log.go:181] (0x40001a9540) (5) Data frame handling\nI0817 13:05:32.001688 3037 log.go:181] (0x4000ec7130) Data frame received for 3\nI0817 13:05:32.001803 3037 log.go:181] (0x4000d02000) (3) Data frame handling\nI0817 13:05:32.002966 3037 log.go:181] (0x4000ec7130) Data frame received for 1\nI0817 13:05:32.003038 3037 log.go:181] (0x4000d02960) (1) Data frame handling\nI0817 13:05:32.003113 3037 log.go:181] (0x4000d02960) (1) Data frame sent\nI0817 13:05:32.003798 3037 log.go:181] (0x4000ec7130) (0x4000d02960) Stream removed, broadcasting: 1\nI0817 13:05:32.006333 3037 log.go:181] (0x4000ec7130) Go away received\nI0817 13:05:32.009013 3037 log.go:181] (0x4000ec7130) (0x4000d02960) Stream removed, broadcasting: 1\nI0817 13:05:32.009490 3037 log.go:181] (0x4000ec7130) (0x4000d02000) Stream removed, broadcasting: 3\nI0817 13:05:32.009886 3037 log.go:181] (0x4000ec7130) (0x40001a9540) Stream removed, broadcasting: 5\n" Aug 17 13:05:32.024: 
INFO: stdout: "\naffinity-nodeport-timeout-wflc9\naffinity-nodeport-timeout-wflc9\naffinity-nodeport-timeout-wflc9\naffinity-nodeport-timeout-wflc9\naffinity-nodeport-timeout-wflc9\naffinity-nodeport-timeout-wflc9\naffinity-nodeport-timeout-wflc9\naffinity-nodeport-timeout-wflc9\naffinity-nodeport-timeout-wflc9\naffinity-nodeport-timeout-wflc9\naffinity-nodeport-timeout-wflc9\naffinity-nodeport-timeout-wflc9\naffinity-nodeport-timeout-wflc9\naffinity-nodeport-timeout-wflc9\naffinity-nodeport-timeout-wflc9\naffinity-nodeport-timeout-wflc9" Aug 17 13:05:32.025: INFO: Received response from host: affinity-nodeport-timeout-wflc9 Aug 17 13:05:32.025: INFO: Received response from host: affinity-nodeport-timeout-wflc9 Aug 17 13:05:32.025: INFO: Received response from host: affinity-nodeport-timeout-wflc9 Aug 17 13:05:32.025: INFO: Received response from host: affinity-nodeport-timeout-wflc9 Aug 17 13:05:32.025: INFO: Received response from host: affinity-nodeport-timeout-wflc9 Aug 17 13:05:32.025: INFO: Received response from host: affinity-nodeport-timeout-wflc9 Aug 17 13:05:32.025: INFO: Received response from host: affinity-nodeport-timeout-wflc9 Aug 17 13:05:32.025: INFO: Received response from host: affinity-nodeport-timeout-wflc9 Aug 17 13:05:32.025: INFO: Received response from host: affinity-nodeport-timeout-wflc9 Aug 17 13:05:32.025: INFO: Received response from host: affinity-nodeport-timeout-wflc9 Aug 17 13:05:32.025: INFO: Received response from host: affinity-nodeport-timeout-wflc9 Aug 17 13:05:32.025: INFO: Received response from host: affinity-nodeport-timeout-wflc9 Aug 17 13:05:32.025: INFO: Received response from host: affinity-nodeport-timeout-wflc9 Aug 17 13:05:32.025: INFO: Received response from host: affinity-nodeport-timeout-wflc9 Aug 17 13:05:32.025: INFO: Received response from host: affinity-nodeport-timeout-wflc9 Aug 17 13:05:32.025: INFO: Received response from host: affinity-nodeport-timeout-wflc9 Aug 17 13:05:32.025: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-5563 execpod-affinityzjzxt -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.18.0.11:31001/' Aug 17 13:05:33.585: INFO: stderr: "I0817 13:05:33.475655 3057 log.go:181] (0x40006c0000) (0x40009b8000) Create stream\nI0817 13:05:33.481961 3057 log.go:181] (0x40006c0000) (0x40009b8000) Stream added, broadcasting: 1\nI0817 13:05:33.496435 3057 log.go:181] (0x40006c0000) Reply frame received for 1\nI0817 13:05:33.497180 3057 log.go:181] (0x40006c0000) (0x40009b80a0) Create stream\nI0817 13:05:33.497253 3057 log.go:181] (0x40006c0000) (0x40009b80a0) Stream added, broadcasting: 3\nI0817 13:05:33.499216 3057 log.go:181] (0x40006c0000) Reply frame received for 3\nI0817 13:05:33.499792 3057 log.go:181] (0x40006c0000) (0x4000465400) Create stream\nI0817 13:05:33.499964 3057 log.go:181] (0x40006c0000) (0x4000465400) Stream added, broadcasting: 5\nI0817 13:05:33.501836 3057 log.go:181] (0x40006c0000) Reply frame received for 5\nI0817 13:05:33.565080 3057 log.go:181] (0x40006c0000) Data frame received for 5\nI0817 13:05:33.565353 3057 log.go:181] (0x4000465400) (5) Data frame handling\nI0817 13:05:33.565906 3057 log.go:181] (0x4000465400) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31001/\nI0817 13:05:33.568467 3057 log.go:181] (0x40006c0000) Data frame received for 3\nI0817 13:05:33.568615 3057 log.go:181] (0x40009b80a0) (3) Data frame handling\nI0817 13:05:33.568718 3057 log.go:181] (0x40009b80a0) (3) 
Data frame sent\nI0817 13:05:33.569971 3057 log.go:181] (0x40006c0000) Data frame received for 3\nI0817 13:05:33.570136 3057 log.go:181] (0x40009b80a0) (3) Data frame handling\nI0817 13:05:33.570268 3057 log.go:181] (0x40006c0000) Data frame received for 5\nI0817 13:05:33.570434 3057 log.go:181] (0x4000465400) (5) Data frame handling\nI0817 13:05:33.571404 3057 log.go:181] (0x40006c0000) Data frame received for 1\nI0817 13:05:33.571474 3057 log.go:181] (0x40009b8000) (1) Data frame handling\nI0817 13:05:33.571553 3057 log.go:181] (0x40009b8000) (1) Data frame sent\nI0817 13:05:33.572517 3057 log.go:181] (0x40006c0000) (0x40009b8000) Stream removed, broadcasting: 1\nI0817 13:05:33.574960 3057 log.go:181] (0x40006c0000) Go away received\nI0817 13:05:33.576517 3057 log.go:181] (0x40006c0000) (0x40009b8000) Stream removed, broadcasting: 1\nI0817 13:05:33.577070 3057 log.go:181] (0x40006c0000) (0x40009b80a0) Stream removed, broadcasting: 3\nI0817 13:05:33.577443 3057 log.go:181] (0x40006c0000) (0x4000465400) Stream removed, broadcasting: 5\n" Aug 17 13:05:33.586: INFO: stdout: "affinity-nodeport-timeout-wflc9" Aug 17 13:05:48.587: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-5563 execpod-affinityzjzxt -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.18.0.11:31001/' Aug 17 13:05:50.147: INFO: stderr: "I0817 13:05:50.041136 3077 log.go:181] (0x40000e9130) (0x40007ac460) Create stream\nI0817 13:05:50.043901 3077 log.go:181] (0x40000e9130) (0x40007ac460) Stream added, broadcasting: 1\nI0817 13:05:50.055886 3077 log.go:181] (0x40000e9130) Reply frame received for 1\nI0817 13:05:50.057055 3077 log.go:181] (0x40000e9130) (0x4000d24dc0) Create stream\nI0817 13:05:50.057175 3077 log.go:181] (0x40000e9130) (0x4000d24dc0) Stream added, broadcasting: 3\nI0817 13:05:50.058889 3077 log.go:181] (0x40000e9130) Reply frame received for 3\nI0817 13:05:50.059223 3077 log.go:181] (0x40000e9130) (0x4000d24e60) Create stream\nI0817 13:05:50.059318 3077 log.go:181] (0x40000e9130) (0x4000d24e60) Stream added, broadcasting: 5\nI0817 13:05:50.060401 3077 log.go:181] (0x40000e9130) Reply frame received for 5\nI0817 13:05:50.123674 3077 log.go:181] (0x40000e9130) Data frame received for 5\nI0817 13:05:50.123975 3077 log.go:181] (0x4000d24e60) (5) Data frame handling\nI0817 13:05:50.124986 3077 log.go:181] (0x4000d24e60) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31001/\nI0817 13:05:50.126800 3077 log.go:181] (0x40000e9130) Data frame received for 3\nI0817 13:05:50.126986 3077 log.go:181] (0x4000d24dc0) (3) Data frame handling\nI0817 13:05:50.127175 3077 log.go:181] (0x4000d24dc0) (3) Data frame sent\nI0817 13:05:50.129622 3077 log.go:181] (0x40000e9130) Data frame received for 3\nI0817 13:05:50.129770 3077 log.go:181] (0x4000d24dc0) (3) Data frame handling\nI0817 13:05:50.130239 3077 log.go:181] (0x40000e9130) Data frame received for 5\nI0817 13:05:50.130378 3077 log.go:181] (0x4000d24e60) (5) Data frame handling\nI0817 13:05:50.131469 3077 log.go:181] (0x40000e9130) Data frame received for 1\nI0817 13:05:50.131565 3077 log.go:181] (0x40007ac460) (1) Data frame handling\nI0817 13:05:50.131650 3077 log.go:181] (0x40007ac460) (1) Data frame sent\nI0817 13:05:50.133692 3077 log.go:181] (0x40000e9130) (0x40007ac460) Stream removed, broadcasting: 1\nI0817 13:05:50.135700 3077 log.go:181] (0x40000e9130) Go away received\nI0817 13:05:50.138071 3077 log.go:181] (0x40000e9130) (0x40007ac460) Stream 
removed, broadcasting: 1\nI0817 13:05:50.138373 3077 log.go:181] (0x40000e9130) (0x4000d24dc0) Stream removed, broadcasting: 3\nI0817 13:05:50.138830 3077 log.go:181] (0x40000e9130) (0x4000d24e60) Stream removed, broadcasting: 5\n" Aug 17 13:05:50.147: INFO: stdout: "affinity-nodeport-timeout-wflc9" Aug 17 13:06:05.148: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-5563 execpod-affinityzjzxt -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.18.0.11:31001/' Aug 17 13:06:06.854: INFO: stderr: "I0817 13:06:06.710878 3098 log.go:181] (0x40008b4840) (0x40007a8320) Create stream\nI0817 13:06:06.714022 3098 log.go:181] (0x40008b4840) (0x40007a8320) Stream added, broadcasting: 1\nI0817 13:06:06.736042 3098 log.go:181] (0x40008b4840) Reply frame received for 1\nI0817 13:06:06.736711 3098 log.go:181] (0x40008b4840) (0x4000518000) Create stream\nI0817 13:06:06.736869 3098 log.go:181] (0x40008b4840) (0x4000518000) Stream added, broadcasting: 3\nI0817 13:06:06.738378 3098 log.go:181] (0x40008b4840) Reply frame received for 3\nI0817 13:06:06.738663 3098 log.go:181] (0x40008b4840) (0x40007a8000) Create stream\nI0817 13:06:06.738756 3098 log.go:181] (0x40008b4840) (0x40007a8000) Stream added, broadcasting: 5\nI0817 13:06:06.740194 3098 log.go:181] (0x40008b4840) Reply frame received for 5\nI0817 13:06:06.814194 3098 log.go:181] (0x40008b4840) Data frame received for 5\nI0817 13:06:06.814410 3098 log.go:181] (0x40007a8000) (5) Data frame handling\nI0817 13:06:06.814912 3098 log.go:181] (0x40007a8000) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31001/\nI0817 13:06:06.831595 3098 log.go:181] (0x40008b4840) Data frame received for 5\nI0817 13:06:06.831751 3098 log.go:181] (0x40007a8000) (5) Data frame handling\nI0817 13:06:06.837105 3098 log.go:181] (0x40008b4840) Data frame received for 3\nI0817 13:06:06.837229 3098 log.go:181] (0x4000518000) (3) Data frame handling\nI0817 13:06:06.837359 3098 log.go:181] (0x4000518000) (3) Data frame sent\nI0817 13:06:06.837443 3098 log.go:181] (0x40008b4840) Data frame received for 3\nI0817 13:06:06.837517 3098 log.go:181] (0x4000518000) (3) Data frame handling\nI0817 13:06:06.838251 3098 log.go:181] (0x40008b4840) Data frame received for 1\nI0817 13:06:06.838341 3098 log.go:181] (0x40007a8320) (1) Data frame handling\nI0817 13:06:06.838430 3098 log.go:181] (0x40007a8320) (1) Data frame sent\nI0817 13:06:06.839107 3098 log.go:181] (0x40008b4840) (0x40007a8320) Stream removed, broadcasting: 1\nI0817 13:06:06.841156 3098 log.go:181] (0x40008b4840) Go away received\nI0817 13:06:06.843379 3098 log.go:181] (0x40008b4840) (0x40007a8320) Stream removed, broadcasting: 1\nI0817 13:06:06.843738 3098 log.go:181] (0x40008b4840) (0x4000518000) Stream removed, broadcasting: 3\nI0817 13:06:06.844155 3098 log.go:181] (0x40008b4840) (0x40007a8000) Stream removed, broadcasting: 5\n" Aug 17 13:06:06.855: INFO: stdout: "affinity-nodeport-timeout-g9wxn" Aug 17 13:06:06.855: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-5563, will wait for the garbage collector to delete the pods Aug 17 13:06:06.953: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 7.720123ms Aug 17 13:06:07.354: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 400.630391ms [AfterEach] [sig-network] Services 
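A note for readers reconstructing the PASSED case above: the suite first probed kube-proxy's metrics endpoint (the curl to http://localhost:10249/proxyMode, which returned "iptables"), since session-affinity timeouts are enforced by the proxier and the mode therefore matters. It then created a NodePort Service with ClientIP session affinity and a timeout, used nc to verify the service was reachable by name, by ClusterIP, and via the NodePort on both nodes, confirmed that 16 consecutive curls from one client all landed on affinity-nodeport-timeout-wflc9, and finally waited past the timeout until a request reached a different backend (affinity-nodeport-timeout-g9wxn). A minimal client-go sketch of the Service shape involved, with illustrative names and values rather than the exact ones the suite uses:

    // Sketch only: a NodePort Service with ClientIP session affinity and a
    // timeout, as exercised above. Names and values here are illustrative.
    package sketch

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func createAffinityNodePortService(ctx context.Context, cs kubernetes.Interface, ns string) (*corev1.Service, error) {
        timeout := int32(10) // short enough that a test can watch affinity expire
        svc := &corev1.Service{
            ObjectMeta: metav1.ObjectMeta{Name: "affinity-nodeport-timeout"},
            Spec: corev1.ServiceSpec{
                Type:     corev1.ServiceTypeNodePort,
                Selector: map[string]string{"name": "affinity-nodeport-timeout"}, // must match the backend pods' labels
                Ports:    []corev1.ServicePort{{Port: 80}},
                // Pin each client IP to one backend until the timeout elapses.
                SessionAffinity: corev1.ServiceAffinityClientIP,
                SessionAffinityConfig: &corev1.SessionAffinityConfig{
                    ClientIP: &corev1.ClientIPConfig{TimeoutSeconds: &timeout},
                },
            },
        }
        return cs.CoreV1().Services(ns).Create(ctx, svc, metav1.CreateOptions{})
    }

The AfterEach teardown whose source location follows on the next line then destroys the services-5563 namespace.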
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 13:06:20.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5563" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:90.980 seconds] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":303,"completed":260,"skipped":4366,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 13:06:20.780: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-7167 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-7167 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-7167 Aug 17 13:06:21.304: INFO: Found 0 stateful pods, waiting for 1 Aug 17 13:06:31.313: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Aug 17 13:06:31.319: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7167 ss-0 -- /bin/sh -x -c mv -v 
/usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 17 13:06:32.985: INFO: stderr: "I0817 13:06:32.848171 3119 log.go:181] (0x40006b8000) (0x40008901e0) Create stream\nI0817 13:06:32.855197 3119 log.go:181] (0x40006b8000) (0x40008901e0) Stream added, broadcasting: 1\nI0817 13:06:32.865738 3119 log.go:181] (0x40006b8000) Reply frame received for 1\nI0817 13:06:32.866278 3119 log.go:181] (0x40006b8000) (0x4000890320) Create stream\nI0817 13:06:32.866332 3119 log.go:181] (0x40006b8000) (0x4000890320) Stream added, broadcasting: 3\nI0817 13:06:32.867660 3119 log.go:181] (0x40006b8000) Reply frame received for 3\nI0817 13:06:32.867901 3119 log.go:181] (0x40006b8000) (0x40009aea00) Create stream\nI0817 13:06:32.867957 3119 log.go:181] (0x40006b8000) (0x40009aea00) Stream added, broadcasting: 5\nI0817 13:06:32.869098 3119 log.go:181] (0x40006b8000) Reply frame received for 5\nI0817 13:06:32.935830 3119 log.go:181] (0x40006b8000) Data frame received for 5\nI0817 13:06:32.936075 3119 log.go:181] (0x40009aea00) (5) Data frame handling\nI0817 13:06:32.936817 3119 log.go:181] (0x40009aea00) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0817 13:06:32.960973 3119 log.go:181] (0x40006b8000) Data frame received for 3\nI0817 13:06:32.961043 3119 log.go:181] (0x4000890320) (3) Data frame handling\nI0817 13:06:32.961220 3119 log.go:181] (0x40006b8000) Data frame received for 5\nI0817 13:06:32.961339 3119 log.go:181] (0x40009aea00) (5) Data frame handling\nI0817 13:06:32.961526 3119 log.go:181] (0x4000890320) (3) Data frame sent\nI0817 13:06:32.961766 3119 log.go:181] (0x40006b8000) Data frame received for 3\nI0817 13:06:32.961928 3119 log.go:181] (0x4000890320) (3) Data frame handling\nI0817 13:06:32.962518 3119 log.go:181] (0x40006b8000) Data frame received for 1\nI0817 13:06:32.962621 3119 log.go:181] (0x40008901e0) (1) Data frame handling\nI0817 13:06:32.962727 3119 log.go:181] (0x40008901e0) (1) Data frame sent\nI0817 13:06:32.965336 3119 log.go:181] (0x40006b8000) (0x40008901e0) Stream removed, broadcasting: 1\nI0817 13:06:32.968313 3119 log.go:181] (0x40006b8000) Go away received\nI0817 13:06:32.973625 3119 log.go:181] (0x40006b8000) (0x40008901e0) Stream removed, broadcasting: 1\nI0817 13:06:32.973999 3119 log.go:181] (0x40006b8000) (0x4000890320) Stream removed, broadcasting: 3\nI0817 13:06:32.974254 3119 log.go:181] (0x40006b8000) (0x40009aea00) Stream removed, broadcasting: 5\n" Aug 17 13:06:32.986: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 17 13:06:32.986: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 17 13:06:32.992: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Aug 17 13:06:43.000: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Aug 17 13:06:43.000: INFO: Waiting for statefulset status.replicas updated to 0 Aug 17 13:06:43.288: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999968833s Aug 17 13:06:44.295: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.729168333s Aug 17 13:06:45.303: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.72151453s Aug 17 13:06:46.312: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.713341632s Aug 17 13:06:47.320: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.704666825s Aug 17 13:06:48.329: INFO: Verifying statefulset ss 
doesn't scale past 1 for another 4.696590896s Aug 17 13:06:49.338: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.687696684s Aug 17 13:06:50.359: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.678313387s Aug 17 13:06:51.367: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.657927127s Aug 17 13:06:52.374: INFO: Verifying statefulset ss doesn't scale past 1 for another 649.949046ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-7167 Aug 17 13:06:53.384: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7167 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 17 13:06:54.976: INFO: stderr: "I0817 13:06:54.840273 3139 log.go:181] (0x400003a6e0) (0x4000e123c0) Create stream\nI0817 13:06:54.844790 3139 log.go:181] (0x400003a6e0) (0x4000e123c0) Stream added, broadcasting: 1\nI0817 13:06:54.862178 3139 log.go:181] (0x400003a6e0) Reply frame received for 1\nI0817 13:06:54.863530 3139 log.go:181] (0x400003a6e0) (0x4000e12460) Create stream\nI0817 13:06:54.863649 3139 log.go:181] (0x400003a6e0) (0x4000e12460) Stream added, broadcasting: 3\nI0817 13:06:54.865367 3139 log.go:181] (0x400003a6e0) Reply frame received for 3\nI0817 13:06:54.865716 3139 log.go:181] (0x400003a6e0) (0x4000da20a0) Create stream\nI0817 13:06:54.865797 3139 log.go:181] (0x400003a6e0) (0x4000da20a0) Stream added, broadcasting: 5\nI0817 13:06:54.867170 3139 log.go:181] (0x400003a6e0) Reply frame received for 5\nI0817 13:06:54.954460 3139 log.go:181] (0x400003a6e0) Data frame received for 5\nI0817 13:06:54.955243 3139 log.go:181] (0x400003a6e0) Data frame received for 1\nI0817 13:06:54.955429 3139 log.go:181] (0x4000e123c0) (1) Data frame handling\nI0817 13:06:54.955532 3139 log.go:181] (0x400003a6e0) Data frame received for 3\nI0817 13:06:54.955634 3139 log.go:181] (0x4000e12460) (3) Data frame handling\nI0817 13:06:54.955850 3139 log.go:181] (0x4000da20a0) (5) Data frame handling\nI0817 13:06:54.957414 3139 log.go:181] (0x4000e12460) (3) Data frame sent\nI0817 13:06:54.957921 3139 log.go:181] (0x4000da20a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0817 13:06:54.958544 3139 log.go:181] (0x400003a6e0) Data frame received for 5\nI0817 13:06:54.958730 3139 log.go:181] (0x4000da20a0) (5) Data frame handling\nI0817 13:06:54.958944 3139 log.go:181] (0x400003a6e0) Data frame received for 3\nI0817 13:06:54.959056 3139 log.go:181] (0x4000e12460) (3) Data frame handling\nI0817 13:06:54.959682 3139 log.go:181] (0x4000e123c0) (1) Data frame sent\nI0817 13:06:54.960705 3139 log.go:181] (0x400003a6e0) (0x4000e123c0) Stream removed, broadcasting: 1\nI0817 13:06:54.964574 3139 log.go:181] (0x400003a6e0) Go away received\nI0817 13:06:54.967218 3139 log.go:181] (0x400003a6e0) (0x4000e123c0) Stream removed, broadcasting: 1\nI0817 13:06:54.967588 3139 log.go:181] (0x400003a6e0) (0x4000e12460) Stream removed, broadcasting: 3\nI0817 13:06:54.968070 3139 log.go:181] (0x400003a6e0) (0x4000da20a0) Stream removed, broadcasting: 5\n" Aug 17 13:06:54.978: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 17 13:06:54.978: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 17 13:06:54.985: INFO: Found 1 stateful pods, waiting for 3 Aug 17 13:07:04.998: INFO: 
Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Aug 17 13:07:04.998: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Aug 17 13:07:04.998: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Aug 17 13:07:05.010: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7167 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 17 13:07:06.692: INFO: stderr: "I0817 13:07:06.584310 3159 log.go:181] (0x400003b3f0) (0x4000154500) Create stream\nI0817 13:07:06.590509 3159 log.go:181] (0x400003b3f0) (0x4000154500) Stream added, broadcasting: 1\nI0817 13:07:06.603218 3159 log.go:181] (0x400003b3f0) Reply frame received for 1\nI0817 13:07:06.604013 3159 log.go:181] (0x400003b3f0) (0x4000712280) Create stream\nI0817 13:07:06.604101 3159 log.go:181] (0x400003b3f0) (0x4000712280) Stream added, broadcasting: 3\nI0817 13:07:06.605904 3159 log.go:181] (0x400003b3f0) Reply frame received for 3\nI0817 13:07:06.606169 3159 log.go:181] (0x400003b3f0) (0x4000712320) Create stream\nI0817 13:07:06.606231 3159 log.go:181] (0x400003b3f0) (0x4000712320) Stream added, broadcasting: 5\nI0817 13:07:06.607744 3159 log.go:181] (0x400003b3f0) Reply frame received for 5\nI0817 13:07:06.661693 3159 log.go:181] (0x400003b3f0) Data frame received for 5\nI0817 13:07:06.662080 3159 log.go:181] (0x4000712320) (5) Data frame handling\nI0817 13:07:06.663065 3159 log.go:181] (0x400003b3f0) Data frame received for 3\nI0817 13:07:06.663185 3159 log.go:181] (0x4000712280) (3) Data frame handling\nI0817 13:07:06.663313 3159 log.go:181] (0x4000712280) (3) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0817 13:07:06.664021 3159 log.go:181] (0x4000712320) (5) Data frame sent\nI0817 13:07:06.664149 3159 log.go:181] (0x400003b3f0) Data frame received for 5\nI0817 13:07:06.664261 3159 log.go:181] (0x400003b3f0) Data frame received for 3\nI0817 13:07:06.664405 3159 log.go:181] (0x4000712280) (3) Data frame handling\nI0817 13:07:06.664502 3159 log.go:181] (0x4000712320) (5) Data frame handling\nI0817 13:07:06.665331 3159 log.go:181] (0x400003b3f0) Data frame received for 1\nI0817 13:07:06.665423 3159 log.go:181] (0x4000154500) (1) Data frame handling\nI0817 13:07:06.665523 3159 log.go:181] (0x4000154500) (1) Data frame sent\nI0817 13:07:06.666926 3159 log.go:181] (0x400003b3f0) (0x4000154500) Stream removed, broadcasting: 1\nI0817 13:07:06.668863 3159 log.go:181] (0x400003b3f0) Go away received\nI0817 13:07:06.683173 3159 log.go:181] (0x400003b3f0) (0x4000154500) Stream removed, broadcasting: 1\nI0817 13:07:06.683433 3159 log.go:181] (0x400003b3f0) (0x4000712280) Stream removed, broadcasting: 3\nI0817 13:07:06.683643 3159 log.go:181] (0x400003b3f0) (0x4000712320) Stream removed, broadcasting: 5\n" Aug 17 13:07:06.693: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 17 13:07:06.693: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 17 13:07:06.693: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7167 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' 
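The mv being exec'd here (its output follows) is the suite's readiness toggle: these webserver pods pass their readiness probe by serving index.html, so moving the file to /tmp flips a pod to Ready=false, which is what forces the StatefulSet controller to halt scaling, and moving it back restores Ready=true. A sketch of that toggle, shelling out to kubectl the way the log does; the helper name is ours, not the framework's:

    // Sketch only: flip a stateful pod's readiness by hiding or restoring the
    // file its readiness probe serves. Assumes kubectl is on PATH.
    package sketch

    import (
        "log"
        "os/exec"
    )

    func setPodReady(ns, pod string, ready bool) error {
        cmd := "mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true" // probe fails -> Ready=false
        if ready {
            cmd = "mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true" // probe passes -> Ready=true
        }
        out, err := exec.Command("kubectl", "exec", "-n", ns, pod, "--",
            "/bin/sh", "-c", cmd).CombinedOutput()
        log.Printf("%q on %s: %s", cmd, pod, out)
        return err
    }

The trailing || true only tolerates the mv itself failing (for instance, the file was already moved); a vanished pod is a different failure mode, and it shows up below for ss-2: first as a race against a terminating container ("cannot exec in a deleted state"), then as NotFound errors that the framework retries every 10 seconds.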
Aug 17 13:07:08.301: INFO: stderr: "I0817 13:07:08.155358 3179 log.go:181] (0x40001d0000) (0x4000e82000) Create stream\nI0817 13:07:08.158457 3179 log.go:181] (0x40001d0000) (0x4000e82000) Stream added, broadcasting: 1\nI0817 13:07:08.176001 3179 log.go:181] (0x40001d0000) Reply frame received for 1\nI0817 13:07:08.176946 3179 log.go:181] (0x40001d0000) (0x4000c93180) Create stream\nI0817 13:07:08.177030 3179 log.go:181] (0x40001d0000) (0x4000c93180) Stream added, broadcasting: 3\nI0817 13:07:08.178157 3179 log.go:181] (0x40001d0000) Reply frame received for 3\nI0817 13:07:08.178376 3179 log.go:181] (0x40001d0000) (0x4000e08000) Create stream\nI0817 13:07:08.178430 3179 log.go:181] (0x40001d0000) (0x4000e08000) Stream added, broadcasting: 5\nI0817 13:07:08.179519 3179 log.go:181] (0x40001d0000) Reply frame received for 5\nI0817 13:07:08.229239 3179 log.go:181] (0x40001d0000) Data frame received for 5\nI0817 13:07:08.229714 3179 log.go:181] (0x4000e08000) (5) Data frame handling\nI0817 13:07:08.230795 3179 log.go:181] (0x4000e08000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0817 13:07:08.279007 3179 log.go:181] (0x40001d0000) Data frame received for 3\nI0817 13:07:08.279243 3179 log.go:181] (0x4000c93180) (3) Data frame handling\nI0817 13:07:08.279543 3179 log.go:181] (0x40001d0000) Data frame received for 5\nI0817 13:07:08.279694 3179 log.go:181] (0x4000e08000) (5) Data frame handling\nI0817 13:07:08.279851 3179 log.go:181] (0x4000c93180) (3) Data frame sent\nI0817 13:07:08.280031 3179 log.go:181] (0x40001d0000) Data frame received for 3\nI0817 13:07:08.280189 3179 log.go:181] (0x4000c93180) (3) Data frame handling\nI0817 13:07:08.280883 3179 log.go:181] (0x40001d0000) Data frame received for 1\nI0817 13:07:08.281049 3179 log.go:181] (0x4000e82000) (1) Data frame handling\nI0817 13:07:08.281250 3179 log.go:181] (0x4000e82000) (1) Data frame sent\nI0817 13:07:08.283291 3179 log.go:181] (0x40001d0000) (0x4000e82000) Stream removed, broadcasting: 1\nI0817 13:07:08.287397 3179 log.go:181] (0x40001d0000) Go away received\nI0817 13:07:08.290422 3179 log.go:181] (0x40001d0000) (0x4000e82000) Stream removed, broadcasting: 1\nI0817 13:07:08.291526 3179 log.go:181] (0x40001d0000) (0x4000c93180) Stream removed, broadcasting: 3\nI0817 13:07:08.292177 3179 log.go:181] (0x40001d0000) (0x4000e08000) Stream removed, broadcasting: 5\n" Aug 17 13:07:08.303: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 17 13:07:08.303: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 17 13:07:08.303: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7167 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 17 13:07:10.417: INFO: stderr: "I0817 13:07:09.952125 3199 log.go:181] (0x4000e05a20) (0x4000724820) Create stream\nI0817 13:07:09.956560 3199 log.go:181] (0x4000e05a20) (0x4000724820) Stream added, broadcasting: 1\nI0817 13:07:09.975102 3199 log.go:181] (0x4000e05a20) Reply frame received for 1\nI0817 13:07:09.976063 3199 log.go:181] (0x4000e05a20) (0x4000724000) Create stream\nI0817 13:07:09.976170 3199 log.go:181] (0x4000e05a20) (0x4000724000) Stream added, broadcasting: 3\nI0817 13:07:09.977609 3199 log.go:181] (0x4000e05a20) Reply frame received for 3\nI0817 13:07:09.977832 3199 log.go:181] (0x4000e05a20) (0x4000d08000) Create stream\nI0817 
13:07:09.977886 3199 log.go:181] (0x4000e05a20) (0x4000d08000) Stream added, broadcasting: 5\nI0817 13:07:09.978950 3199 log.go:181] (0x4000e05a20) Reply frame received for 5\nI0817 13:07:10.037978 3199 log.go:181] (0x4000e05a20) Data frame received for 5\nI0817 13:07:10.038380 3199 log.go:181] (0x4000d08000) (5) Data frame handling\nI0817 13:07:10.039166 3199 log.go:181] (0x4000d08000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0817 13:07:10.395878 3199 log.go:181] (0x4000e05a20) Data frame received for 3\nI0817 13:07:10.396098 3199 log.go:181] (0x4000724000) (3) Data frame handling\nI0817 13:07:10.396311 3199 log.go:181] (0x4000e05a20) Data frame received for 5\nI0817 13:07:10.396579 3199 log.go:181] (0x4000d08000) (5) Data frame handling\nI0817 13:07:10.396858 3199 log.go:181] (0x4000724000) (3) Data frame sent\nI0817 13:07:10.396950 3199 log.go:181] (0x4000e05a20) Data frame received for 3\nI0817 13:07:10.397013 3199 log.go:181] (0x4000724000) (3) Data frame handling\nI0817 13:07:10.398119 3199 log.go:181] (0x4000e05a20) Data frame received for 1\nI0817 13:07:10.398208 3199 log.go:181] (0x4000724820) (1) Data frame handling\nI0817 13:07:10.398288 3199 log.go:181] (0x4000724820) (1) Data frame sent\nI0817 13:07:10.399893 3199 log.go:181] (0x4000e05a20) (0x4000724820) Stream removed, broadcasting: 1\nI0817 13:07:10.403295 3199 log.go:181] (0x4000e05a20) Go away received\nI0817 13:07:10.405598 3199 log.go:181] (0x4000e05a20) (0x4000724820) Stream removed, broadcasting: 1\nI0817 13:07:10.406012 3199 log.go:181] (0x4000e05a20) (0x4000724000) Stream removed, broadcasting: 3\nI0817 13:07:10.406281 3199 log.go:181] (0x4000e05a20) (0x4000d08000) Stream removed, broadcasting: 5\n" Aug 17 13:07:10.417: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 17 13:07:10.417: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 17 13:07:10.418: INFO: Waiting for statefulset status.replicas updated to 0 Aug 17 13:07:10.422: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Aug 17 13:07:21.157: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Aug 17 13:07:21.157: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Aug 17 13:07:21.157: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Aug 17 13:07:21.479: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999995585s Aug 17 13:07:22.981: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.903183387s Aug 17 13:07:24.614: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.401522063s Aug 17 13:07:25.658: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.768482306s Aug 17 13:07:27.598: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.724610815s Aug 17 13:07:28.637: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.783844335s Aug 17 13:07:29.646: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.745712474s Aug 17 13:07:30.657: INFO: Verifying statefulset ss doesn't scale past 3 for another 736.645816ms STEP: Scaling down stateful set ss to 0 replicas and waiting until no pods are running in namespace statefulset-7167 Aug 17 13:07:31.666: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec
--namespace=statefulset-7167 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 17 13:07:33.308: INFO: stderr: "I0817 13:07:33.178798 3219 log.go:181] (0x4000d0e160) (0x40007b2280) Create stream\nI0817 13:07:33.185123 3219 log.go:181] (0x4000d0e160) (0x40007b2280) Stream added, broadcasting: 1\nI0817 13:07:33.206384 3219 log.go:181] (0x4000d0e160) Reply frame received for 1\nI0817 13:07:33.207379 3219 log.go:181] (0x4000d0e160) (0x4000d98000) Create stream\nI0817 13:07:33.207493 3219 log.go:181] (0x4000d0e160) (0x4000d98000) Stream added, broadcasting: 3\nI0817 13:07:33.209059 3219 log.go:181] (0x4000d0e160) Reply frame received for 3\nI0817 13:07:33.209310 3219 log.go:181] (0x4000d0e160) (0x4000d980a0) Create stream\nI0817 13:07:33.209372 3219 log.go:181] (0x4000d0e160) (0x4000d980a0) Stream added, broadcasting: 5\nI0817 13:07:33.210627 3219 log.go:181] (0x4000d0e160) Reply frame received for 5\nI0817 13:07:33.286248 3219 log.go:181] (0x4000d0e160) Data frame received for 3\nI0817 13:07:33.286506 3219 log.go:181] (0x4000d0e160) Data frame received for 1\nI0817 13:07:33.286945 3219 log.go:181] (0x4000d98000) (3) Data frame handling\nI0817 13:07:33.287512 3219 log.go:181] (0x40007b2280) (1) Data frame handling\nI0817 13:07:33.288239 3219 log.go:181] (0x4000d98000) (3) Data frame sent\nI0817 13:07:33.288377 3219 log.go:181] (0x40007b2280) (1) Data frame sent\nI0817 13:07:33.289131 3219 log.go:181] (0x4000d0e160) Data frame received for 3\nI0817 13:07:33.289235 3219 log.go:181] (0x4000d98000) (3) Data frame handling\nI0817 13:07:33.289941 3219 log.go:181] (0x4000d0e160) Data frame received for 5\nI0817 13:07:33.290086 3219 log.go:181] (0x4000d0e160) (0x40007b2280) Stream removed, broadcasting: 1\nI0817 13:07:33.292457 3219 log.go:181] (0x4000d980a0) (5) Data frame handling\nI0817 13:07:33.292571 3219 log.go:181] (0x4000d980a0) (5) Data frame sent\nI0817 13:07:33.292653 3219 log.go:181] (0x4000d0e160) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0817 13:07:33.292719 3219 log.go:181] (0x4000d980a0) (5) Data frame handling\nI0817 13:07:33.295107 3219 log.go:181] (0x4000d0e160) Go away received\nI0817 13:07:33.298080 3219 log.go:181] (0x4000d0e160) (0x40007b2280) Stream removed, broadcasting: 1\nI0817 13:07:33.298662 3219 log.go:181] (0x4000d0e160) (0x4000d98000) Stream removed, broadcasting: 3\nI0817 13:07:33.298934 3219 log.go:181] (0x4000d0e160) (0x4000d980a0) Stream removed, broadcasting: 5\n" Aug 17 13:07:33.309: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 17 13:07:33.310: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 17 13:07:33.310: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7167 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 17 13:07:35.093: INFO: stderr: "I0817 13:07:34.958317 3240 log.go:181] (0x40009aa840) (0x4000148c80) Create stream\nI0817 13:07:34.961673 3240 log.go:181] (0x40009aa840) (0x4000148c80) Stream added, broadcasting: 1\nI0817 13:07:34.977738 3240 log.go:181] (0x40009aa840) Reply frame received for 1\nI0817 13:07:34.978348 3240 log.go:181] (0x40009aa840) (0x400038e000) Create stream\nI0817 13:07:34.978417 3240 log.go:181] (0x40009aa840) (0x400038e000) Stream added, broadcasting: 3\nI0817 13:07:34.980269 3240 log.go:181] (0x40009aa840) Reply 
frame received for 3\nI0817 13:07:34.980859 3240 log.go:181] (0x40009aa840) (0x400038e140) Create stream\nI0817 13:07:34.980972 3240 log.go:181] (0x40009aa840) (0x400038e140) Stream added, broadcasting: 5\nI0817 13:07:34.982332 3240 log.go:181] (0x40009aa840) Reply frame received for 5\nI0817 13:07:35.069911 3240 log.go:181] (0x40009aa840) Data frame received for 3\nI0817 13:07:35.070755 3240 log.go:181] (0x40009aa840) Data frame received for 5\nI0817 13:07:35.071003 3240 log.go:181] (0x400038e000) (3) Data frame handling\nI0817 13:07:35.071266 3240 log.go:181] (0x400038e140) (5) Data frame handling\nI0817 13:07:35.071608 3240 log.go:181] (0x40009aa840) Data frame received for 1\nI0817 13:07:35.071758 3240 log.go:181] (0x4000148c80) (1) Data frame handling\nI0817 13:07:35.072622 3240 log.go:181] (0x400038e140) (5) Data frame sent\nI0817 13:07:35.072872 3240 log.go:181] (0x4000148c80) (1) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0817 13:07:35.073364 3240 log.go:181] (0x40009aa840) Data frame received for 5\nI0817 13:07:35.073496 3240 log.go:181] (0x400038e140) (5) Data frame handling\nI0817 13:07:35.073670 3240 log.go:181] (0x400038e000) (3) Data frame sent\nI0817 13:07:35.073774 3240 log.go:181] (0x40009aa840) Data frame received for 3\nI0817 13:07:35.074546 3240 log.go:181] (0x40009aa840) (0x4000148c80) Stream removed, broadcasting: 1\nI0817 13:07:35.076090 3240 log.go:181] (0x400038e000) (3) Data frame handling\nI0817 13:07:35.079350 3240 log.go:181] (0x40009aa840) Go away received\nI0817 13:07:35.082576 3240 log.go:181] (0x40009aa840) (0x4000148c80) Stream removed, broadcasting: 1\nI0817 13:07:35.083264 3240 log.go:181] (0x40009aa840) (0x400038e000) Stream removed, broadcasting: 3\nI0817 13:07:35.083515 3240 log.go:181] (0x40009aa840) (0x400038e140) Stream removed, broadcasting: 5\n" Aug 17 13:07:35.094: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 17 13:07:35.094: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 17 13:07:35.095: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7167 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 17 13:07:39.982: INFO: rc: 1 Aug 17 13:07:39.983: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7167 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: I0817 13:07:36.912474 3260 log.go:181] (0x400012a0b0) (0x4000b581e0) Create stream I0817 13:07:36.917371 3260 log.go:181] (0x400012a0b0) (0x4000b581e0) Stream added, broadcasting: 1 I0817 13:07:36.933312 3260 log.go:181] (0x400012a0b0) Reply frame received for 1 I0817 13:07:36.934407 3260 log.go:181] (0x400012a0b0) (0x4000848280) Create stream I0817 13:07:36.934503 3260 log.go:181] (0x400012a0b0) (0x4000848280) Stream added, broadcasting: 3 I0817 13:07:36.935940 3260 log.go:181] (0x400012a0b0) Reply frame received for 3 I0817 13:07:36.936271 3260 log.go:181] (0x400012a0b0) (0x4000e98000) Create stream I0817 13:07:36.936350 3260 log.go:181] (0x400012a0b0) (0x4000e98000) Stream added, broadcasting: 5 I0817 13:07:36.937740 3260 log.go:181] (0x400012a0b0) Reply frame received for 5 I0817 13:07:39.959045 3260 log.go:181] (0x400012a0b0) Data frame 
received for 1 I0817 13:07:39.959773 3260 log.go:181] (0x4000b581e0) (1) Data frame handling I0817 13:07:39.961757 3260 log.go:181] (0x400012a0b0) (0x4000848280) Stream removed, broadcasting: 3 I0817 13:07:39.963560 3260 log.go:181] (0x400012a0b0) (0x4000e98000) Stream removed, broadcasting: 5 I0817 13:07:39.964202 3260 log.go:181] (0x4000b581e0) (1) Data frame sent I0817 13:07:39.965705 3260 log.go:181] (0x400012a0b0) (0x4000b581e0) Stream removed, broadcasting: 1 I0817 13:07:39.965961 3260 log.go:181] (0x400012a0b0) Go away received I0817 13:07:39.969991 3260 log.go:181] (0x400012a0b0) (0x4000b581e0) Stream removed, broadcasting: 1 I0817 13:07:39.970266 3260 log.go:181] (0x400012a0b0) (0x4000848280) Stream removed, broadcasting: 3 I0817 13:07:39.970329 3260 log.go:181] (0x400012a0b0) (0x4000e98000) Stream removed, broadcasting: 5 error: Internal error occurred: error executing command in container: failed to exec in container: failed to create exec "13d40ecb1c98c3becba832ae77853d178ffa43a71ab8ee5e74ff03e5798e8b7f": cannot exec in a deleted state: unknown error: exit status 1 Aug 17 13:07:49.983: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7167 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 17 13:07:51.330: INFO: rc: 1 Aug 17 13:07:51.330: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7167 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1
[The identical RunHostCmd retry was issued every ~10s, 24 more times between 13:08:01 and 13:12:26; every attempt returned rc: 1 with the same stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1]
Aug 17 13:12:36.171: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7167 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/
|| true' Aug 17 13:12:37.623: INFO: rc: 1 Aug 17 13:12:37.623: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: Aug 17 13:12:37.623: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Aug 17 13:12:37.636: INFO: Deleting all statefulset in ns statefulset-7167 Aug 17 13:12:37.640: INFO: Scaling statefulset ss to 0 Aug 17 13:12:37.649: INFO: Waiting for statefulset status.replicas updated to 0 Aug 17 13:12:37.652: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 13:12:37.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7167" for this suite. • [SLOW TEST:376.927 seconds] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":303,"completed":261,"skipped":4387,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 13:12:37.709: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Aug 17 13:12:38.455: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6420 
/api/v1/namespaces/watch-6420/configmaps/e2e-watch-test-watch-closed 424ba5de-507f-432b-a8a2-42f8e2c4ea8b 734546 0 2020-08-17 13:12:37 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-08-17 13:12:37 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Aug 17 13:12:38.456: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6420 /api/v1/namespaces/watch-6420/configmaps/e2e-watch-test-watch-closed 424ba5de-507f-432b-a8a2-42f8e2c4ea8b 734547 0 2020-08-17 13:12:37 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-08-17 13:12:38 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Aug 17 13:12:38.695: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6420 /api/v1/namespaces/watch-6420/configmaps/e2e-watch-test-watch-closed 424ba5de-507f-432b-a8a2-42f8e2c4ea8b 734548 0 2020-08-17 13:12:37 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-08-17 13:12:38 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Aug 17 13:12:38.696: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6420 /api/v1/namespaces/watch-6420/configmaps/e2e-watch-test-watch-closed 424ba5de-507f-432b-a8a2-42f8e2c4ea8b 734549 0 2020-08-17 13:12:37 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-08-17 13:12:38 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 13:12:38.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6420" for this suite. 
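------------------------------
The watch-restart pattern exercised above (observe events, close the watch, then open a new watch at the last resourceVersion observed so that changes made in the meantime are still delivered) is easy to reproduce outside the suite. The following is a minimal client-go sketch, not the e2e framework's own code; it assumes a reachable cluster, a kubeconfig at the default location, and the "default" namespace:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()
	cms := client.CoreV1().ConfigMaps("default")

	// First watch: record the resourceVersion of each ConfigMap seen,
	// then close the watch after the first MODIFIED event.
	w, err := cms.Watch(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	var lastRV string
	for ev := range w.ResultChan() {
		cm, ok := ev.Object.(*corev1.ConfigMap)
		if !ok {
			continue
		}
		lastRV = cm.ResourceVersion
		fmt.Println("Got :", ev.Type, cm.Name, "rv", lastRV)
		if ev.Type == watch.Modified {
			w.Stop() // closes ResultChan and ends the loop
		}
	}

	// Second watch: resume from the last observed resourceVersion, so
	// modifications made while no watch was open are replayed first.
	w2, err := cms.Watch(ctx, metav1.ListOptions{ResourceVersion: lastRV})
	if err != nil {
		panic(err)
	}
	for ev := range w2.ResultChan() {
		fmt.Println("Replayed:", ev.Type)
		if ev.Type == watch.Deleted {
			w2.Stop()
		}
	}
}

Passing a non-empty ResourceVersion in ListOptions is what makes the second watch start "from the last resource version observed by the previous watch" rather than from now.
------------------------------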
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":303,"completed":262,"skipped":4394,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 13:12:38.730: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on tmpfs Aug 17 13:12:38.874: INFO: Waiting up to 5m0s for pod "pod-bd864534-e967-49d3-826b-ca2c11b08bde" in namespace "emptydir-2086" to be "Succeeded or Failed" Aug 17 13:12:38.980: INFO: Pod "pod-bd864534-e967-49d3-826b-ca2c11b08bde": Phase="Pending", Reason="", readiness=false. Elapsed: 106.009325ms Aug 17 13:12:40.986: INFO: Pod "pod-bd864534-e967-49d3-826b-ca2c11b08bde": Phase="Pending", Reason="", readiness=false. Elapsed: 2.112065616s Aug 17 13:12:43.037: INFO: Pod "pod-bd864534-e967-49d3-826b-ca2c11b08bde": Phase="Pending", Reason="", readiness=false. Elapsed: 4.162863903s Aug 17 13:12:45.044: INFO: Pod "pod-bd864534-e967-49d3-826b-ca2c11b08bde": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.169388736s STEP: Saw pod success Aug 17 13:12:45.044: INFO: Pod "pod-bd864534-e967-49d3-826b-ca2c11b08bde" satisfied condition "Succeeded or Failed" Aug 17 13:12:45.048: INFO: Trying to get logs from node latest-worker pod pod-bd864534-e967-49d3-826b-ca2c11b08bde container test-container: STEP: delete the pod Aug 17 13:12:45.196: INFO: Waiting for pod pod-bd864534-e967-49d3-826b-ca2c11b08bde to disappear Aug 17 13:12:45.429: INFO: Pod pod-bd864534-e967-49d3-826b-ca2c11b08bde no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 13:12:45.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2086" for this suite. 
• [SLOW TEST:6.736 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":263,"skipped":4400,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Ingress API should support creating Ingress API operations [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Ingress API /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 13:12:45.468: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ingress STEP: Waiting for a default service account to be provisioned in namespace [It] should support creating Ingress API operations [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting /apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.iov1 STEP: creating STEP: getting STEP: listing STEP: watching Aug 17 13:12:45.766: INFO: starting watch STEP: cluster-wide listing STEP: cluster-wide watching Aug 17 13:12:45.772: INFO: starting watch STEP: patching STEP: updating Aug 17 13:12:45.960: INFO: waiting for watch events with expected annotations Aug 17 13:12:45.961: INFO: saw patched and updated annotations STEP: patching /status STEP: updating /status STEP: get /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] Ingress API /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 13:12:46.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ingress-5659" for this suite. 
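------------------------------
The Ingress API test above walks the standard verbs (create, get, list, watch, patch, update, delete, delete-collection) against networking.k8s.io/v1. Below is a trimmed-down sketch of the core calls with client-go; this is not the conformance test itself, and the "demo" names and backing Service are made up:

package main

import (
	"context"
	"fmt"

	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()
	ings := client.NetworkingV1().Ingresses("default")

	ing := &networkingv1.Ingress{
		ObjectMeta: metav1.ObjectMeta{Name: "demo"},
		Spec: networkingv1.IngressSpec{
			DefaultBackend: &networkingv1.IngressBackend{
				Service: &networkingv1.IngressServiceBackend{
					Name: "demo-svc", // hypothetical backing Service
					Port: networkingv1.ServiceBackendPort{Number: 80},
				},
			},
		},
	}

	// create / list / patch / delete, mirroring the STEPs above.
	if _, err := ings.Create(ctx, ing, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	list, err := ings.List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("ingresses:", len(list.Items))
	patch := []byte(`{"metadata":{"annotations":{"patched":"true"}}}`)
	if _, err := ings.Patch(ctx, "demo", types.MergePatchType, patch, metav1.PatchOptions{}); err != nil {
		panic(err)
	}
	if err := ings.Delete(ctx, "demo", metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
}
------------------------------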
•{"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":303,"completed":264,"skipped":4432,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 13:12:46.215: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 17 13:12:46.379: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"2452cb99-5274-4386-a39f-6425383caf25", Controller:(*bool)(0x4005c6aa8a), BlockOwnerDeletion:(*bool)(0x4005c6aa8b)}} Aug 17 13:12:46.413: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"bd85472f-a145-419e-89a9-417b14c4d453", Controller:(*bool)(0x4005d3ccba), BlockOwnerDeletion:(*bool)(0x4005d3ccbb)}} Aug 17 13:12:46.441: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"fe8b0e0b-0a6e-4a9b-be69-55c33edf095a", Controller:(*bool)(0x4005c6ac82), BlockOwnerDeletion:(*bool)(0x4005c6ac83)}} [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 13:12:51.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6791" for this suite. 
• [SLOW TEST:5.363 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":303,"completed":265,"skipped":4439,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 13:12:51.579: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 17 13:12:52.168: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dfc54813-0790-40fa-9744-35d7f13a957d" in namespace "projected-5711" to be "Succeeded or Failed" Aug 17 13:12:52.427: INFO: Pod "downwardapi-volume-dfc54813-0790-40fa-9744-35d7f13a957d": Phase="Pending", Reason="", readiness=false. Elapsed: 259.048991ms Aug 17 13:12:54.617: INFO: Pod "downwardapi-volume-dfc54813-0790-40fa-9744-35d7f13a957d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.448803991s Aug 17 13:12:56.627: INFO: Pod "downwardapi-volume-dfc54813-0790-40fa-9744-35d7f13a957d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.459192012s Aug 17 13:12:58.635: INFO: Pod "downwardapi-volume-dfc54813-0790-40fa-9744-35d7f13a957d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.466669639s STEP: Saw pod success Aug 17 13:12:58.635: INFO: Pod "downwardapi-volume-dfc54813-0790-40fa-9744-35d7f13a957d" satisfied condition "Succeeded or Failed" Aug 17 13:12:58.640: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-dfc54813-0790-40fa-9744-35d7f13a957d container client-container: STEP: delete the pod Aug 17 13:12:58.879: INFO: Waiting for pod downwardapi-volume-dfc54813-0790-40fa-9744-35d7f13a957d to disappear Aug 17 13:12:58.898: INFO: Pod downwardapi-volume-dfc54813-0790-40fa-9744-35d7f13a957d no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 13:12:58.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5711" for this suite. • [SLOW TEST:7.359 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":303,"completed":266,"skipped":4445,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 13:12:58.939: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting the auto-created API token STEP: reading a file in the container Aug 17 13:13:05.702: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9522 pod-service-account-50292a6a-f43a-4a20-b04f-bd251b1721ec -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Aug 17 13:13:11.383: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9522 pod-service-account-50292a6a-f43a-4a20-b04f-bd251b1721ec -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Aug 17 13:13:13.031: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9522 pod-service-account-50292a6a-f43a-4a20-b04f-bd251b1721ec -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts 
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 13:13:14.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-9522" for this suite. • [SLOW TEST:15.753 seconds] [sig-auth] ServiceAccounts /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":303,"completed":267,"skipped":4454,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 13:13:14.693: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5073 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5073;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5073 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5073;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5073.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5073.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5073.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5073.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5073.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-5073.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5073.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-5073.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5073.svc SRV)" && test -n "$$check" && echo OK > 
/results/wheezy_udp@_http._tcp.test-service-2.dns-5073.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5073.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-5073.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5073.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 30.178.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.178.30_udp@PTR;check="$$(dig +tcp +noall +answer +search 30.178.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.178.30_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5073 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5073;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5073 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5073;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5073.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5073.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5073.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5073.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5073.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-5073.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5073.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-5073.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5073.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-5073.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5073.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-5073.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5073.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 30.178.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.178.30_udp@PTR;check="$$(dig +tcp +noall +answer +search 30.178.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.178.30_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 17 13:13:30.688: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5073/dns-test-f7a12bf0-34b7-474e-b3b8-e4d4c205a3c9: the server could not find the requested resource (get pods dns-test-f7a12bf0-34b7-474e-b3b8-e4d4c205a3c9) Aug 17 13:13:30.693: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5073/dns-test-f7a12bf0-34b7-474e-b3b8-e4d4c205a3c9: the server could not find the requested resource (get pods dns-test-f7a12bf0-34b7-474e-b3b8-e4d4c205a3c9) Aug 17 13:13:30.697: INFO: Unable to read wheezy_udp@dns-test-service.dns-5073 from pod dns-5073/dns-test-f7a12bf0-34b7-474e-b3b8-e4d4c205a3c9: the server could not find the requested resource (get pods dns-test-f7a12bf0-34b7-474e-b3b8-e4d4c205a3c9) Aug 17 13:13:30.701: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5073 from pod dns-5073/dns-test-f7a12bf0-34b7-474e-b3b8-e4d4c205a3c9: the server could not find the requested resource (get pods dns-test-f7a12bf0-34b7-474e-b3b8-e4d4c205a3c9) Aug 17 13:13:30.707: INFO: Unable to read wheezy_udp@dns-test-service.dns-5073.svc from pod dns-5073/dns-test-f7a12bf0-34b7-474e-b3b8-e4d4c205a3c9: the server could not find the requested resource (get pods dns-test-f7a12bf0-34b7-474e-b3b8-e4d4c205a3c9) Aug 17 13:13:30.720: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5073.svc from pod dns-5073/dns-test-f7a12bf0-34b7-474e-b3b8-e4d4c205a3c9: the server could not find the requested resource (get pods dns-test-f7a12bf0-34b7-474e-b3b8-e4d4c205a3c9) Aug 17 13:13:30.751: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5073.svc from pod dns-5073/dns-test-f7a12bf0-34b7-474e-b3b8-e4d4c205a3c9: the server could not find the requested resource (get pods dns-test-f7a12bf0-34b7-474e-b3b8-e4d4c205a3c9) Aug 17 13:13:30.988: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5073.svc from pod dns-5073/dns-test-f7a12bf0-34b7-474e-b3b8-e4d4c205a3c9: the server could not find the requested resource (get pods dns-test-f7a12bf0-34b7-474e-b3b8-e4d4c205a3c9) Aug 17 13:13:31.566: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5073/dns-test-f7a12bf0-34b7-474e-b3b8-e4d4c205a3c9: the server could not find the requested resource (get pods dns-test-f7a12bf0-34b7-474e-b3b8-e4d4c205a3c9) Aug 17 13:13:31.570: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5073/dns-test-f7a12bf0-34b7-474e-b3b8-e4d4c205a3c9: the server could not find the requested resource (get pods dns-test-f7a12bf0-34b7-474e-b3b8-e4d4c205a3c9) Aug 17 13:13:31.573: INFO: Unable to read jessie_udp@dns-test-service.dns-5073 from pod dns-5073/dns-test-f7a12bf0-34b7-474e-b3b8-e4d4c205a3c9: the server could not find the requested resource (get pods dns-test-f7a12bf0-34b7-474e-b3b8-e4d4c205a3c9) Aug 17 13:13:31.576: INFO: Unable to read jessie_tcp@dns-test-service.dns-5073 from pod dns-5073/dns-test-f7a12bf0-34b7-474e-b3b8-e4d4c205a3c9: the server could not find the requested resource (get pods dns-test-f7a12bf0-34b7-474e-b3b8-e4d4c205a3c9) Aug 17 13:13:31.579: INFO: Unable to read jessie_udp@dns-test-service.dns-5073.svc from pod dns-5073/dns-test-f7a12bf0-34b7-474e-b3b8-e4d4c205a3c9: the server could not find the requested resource (get pods dns-test-f7a12bf0-34b7-474e-b3b8-e4d4c205a3c9) Aug 17 13:13:31.582: INFO: Unable to read 
jessie_tcp@dns-test-service.dns-5073.svc from pod dns-5073/dns-test-f7a12bf0-34b7-474e-b3b8-e4d4c205a3c9: the server could not find the requested resource (get pods dns-test-f7a12bf0-34b7-474e-b3b8-e4d4c205a3c9) Aug 17 13:13:31.586: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5073.svc from pod dns-5073/dns-test-f7a12bf0-34b7-474e-b3b8-e4d4c205a3c9: the server could not find the requested resource (get pods dns-test-f7a12bf0-34b7-474e-b3b8-e4d4c205a3c9) Aug 17 13:13:31.589: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5073.svc from pod dns-5073/dns-test-f7a12bf0-34b7-474e-b3b8-e4d4c205a3c9: the server could not find the requested resource (get pods dns-test-f7a12bf0-34b7-474e-b3b8-e4d4c205a3c9) Aug 17 13:13:31.612: INFO: Lookups using dns-5073/dns-test-f7a12bf0-34b7-474e-b3b8-e4d4c205a3c9 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5073 wheezy_tcp@dns-test-service.dns-5073 wheezy_udp@dns-test-service.dns-5073.svc wheezy_tcp@dns-test-service.dns-5073.svc wheezy_udp@_http._tcp.dns-test-service.dns-5073.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5073.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5073 jessie_tcp@dns-test-service.dns-5073 jessie_udp@dns-test-service.dns-5073.svc jessie_tcp@dns-test-service.dns-5073.svc jessie_udp@_http._tcp.dns-test-service.dns-5073.svc jessie_tcp@_http._tcp.dns-test-service.dns-5073.svc]
[The same 16 lookups were retried at 13:13:36, 13:13:41, and 13:13:46; every round failed identically and ended with the same "Lookups using dns-5073/dns-test-f7a12bf0-34b7-474e-b3b8-e4d4c205a3c9 failed for: [...]" summary]
Aug 17 13:13:51.619: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5073/dns-test-f7a12bf0-34b7-474e-b3b8-e4d4c205a3c9: the server could not find the requested resource (get pods dns-test-f7a12bf0-34b7-474e-b3b8-e4d4c205a3c9) Aug 17 13:13:51.626: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5073/dns-test-f7a12bf0-34b7-474e-b3b8-e4d4c205a3c9: the server could not find the requested resource (get pods dns-test-f7a12bf0-34b7-474e-b3b8-e4d4c205a3c9) Aug 17 13:13:51.630: INFO: Unable to read wheezy_udp@dns-test-service.dns-5073 from pod dns-5073/dns-test-f7a12bf0-34b7-474e-b3b8-e4d4c205a3c9: the server could not find the requested resource (get pods dns-test-f7a12bf0-34b7-474e-b3b8-e4d4c205a3c9) Aug 17 13:13:51.634: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5073 from pod dns-5073/dns-test-f7a12bf0-34b7-474e-b3b8-e4d4c205a3c9: the server could not find the requested resource (get pods dns-test-f7a12bf0-34b7-474e-b3b8-e4d4c205a3c9) Aug 17 13:13:51.636: INFO: Unable to read wheezy_udp@dns-test-service.dns-5073.svc from pod dns-5073/dns-test-f7a12bf0-34b7-474e-b3b8-e4d4c205a3c9: the server could not find the requested resource (get pods dns-test-f7a12bf0-34b7-474e-b3b8-e4d4c205a3c9) Aug 17 13:13:51.639: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5073.svc from pod dns-5073/dns-test-f7a12bf0-34b7-474e-b3b8-e4d4c205a3c9: the server could not find the requested resource (get pods dns-test-f7a12bf0-34b7-474e-b3b8-e4d4c205a3c9) Aug 17 13:13:51.642: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5073.svc from pod dns-5073/dns-test-f7a12bf0-34b7-474e-b3b8-e4d4c205a3c9: the server could not find the requested resource (get pods dns-test-f7a12bf0-34b7-474e-b3b8-e4d4c205a3c9) Aug 17 13:13:51.646: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5073.svc from pod
dns-5073/dns-test-f7a12bf0-34b7-474e-b3b8-e4d4c205a3c9: the server could not find the requested resource (get pods dns-test-f7a12bf0-34b7-474e-b3b8-e4d4c205a3c9) Aug 17 13:13:51.669: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5073/dns-test-f7a12bf0-34b7-474e-b3b8-e4d4c205a3c9: the server could not find the requested resource (get pods dns-test-f7a12bf0-34b7-474e-b3b8-e4d4c205a3c9) Aug 17 13:13:51.673: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5073/dns-test-f7a12bf0-34b7-474e-b3b8-e4d4c205a3c9: the server could not find the requested resource (get pods dns-test-f7a12bf0-34b7-474e-b3b8-e4d4c205a3c9) Aug 17 13:13:51.677: INFO: Unable to read jessie_udp@dns-test-service.dns-5073 from pod dns-5073/dns-test-f7a12bf0-34b7-474e-b3b8-e4d4c205a3c9: the server could not find the requested resource (get pods dns-test-f7a12bf0-34b7-474e-b3b8-e4d4c205a3c9) Aug 17 13:13:51.682: INFO: Unable to read jessie_tcp@dns-test-service.dns-5073 from pod dns-5073/dns-test-f7a12bf0-34b7-474e-b3b8-e4d4c205a3c9: the server could not find the requested resource (get pods dns-test-f7a12bf0-34b7-474e-b3b8-e4d4c205a3c9) Aug 17 13:13:51.686: INFO: Unable to read jessie_udp@dns-test-service.dns-5073.svc from pod dns-5073/dns-test-f7a12bf0-34b7-474e-b3b8-e4d4c205a3c9: the server could not find the requested resource (get pods dns-test-f7a12bf0-34b7-474e-b3b8-e4d4c205a3c9) Aug 17 13:13:51.690: INFO: Unable to read jessie_tcp@dns-test-service.dns-5073.svc from pod dns-5073/dns-test-f7a12bf0-34b7-474e-b3b8-e4d4c205a3c9: the server could not find the requested resource (get pods dns-test-f7a12bf0-34b7-474e-b3b8-e4d4c205a3c9) Aug 17 13:13:51.694: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5073.svc from pod dns-5073/dns-test-f7a12bf0-34b7-474e-b3b8-e4d4c205a3c9: the server could not find the requested resource (get pods dns-test-f7a12bf0-34b7-474e-b3b8-e4d4c205a3c9) Aug 17 13:13:51.698: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5073.svc from pod dns-5073/dns-test-f7a12bf0-34b7-474e-b3b8-e4d4c205a3c9: the server could not find the requested resource (get pods dns-test-f7a12bf0-34b7-474e-b3b8-e4d4c205a3c9) Aug 17 13:13:51.726: INFO: Lookups using dns-5073/dns-test-f7a12bf0-34b7-474e-b3b8-e4d4c205a3c9 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5073 wheezy_tcp@dns-test-service.dns-5073 wheezy_udp@dns-test-service.dns-5073.svc wheezy_tcp@dns-test-service.dns-5073.svc wheezy_udp@_http._tcp.dns-test-service.dns-5073.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5073.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5073 jessie_tcp@dns-test-service.dns-5073 jessie_udp@dns-test-service.dns-5073.svc jessie_tcp@dns-test-service.dns-5073.svc jessie_udp@_http._tcp.dns-test-service.dns-5073.svc jessie_tcp@_http._tcp.dns-test-service.dns-5073.svc] Aug 17 13:13:56.836: INFO: DNS probes using dns-5073/dns-test-f7a12bf0-34b7-474e-b3b8-e4d4c205a3c9 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 13:13:57.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5073" for this suite. 
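The long runs of "Unable to read ... from pod" entries above are the probe loop polling for lookup results that the test pod has not yet written; once every name resolves, the probes succeed (13:13:56 above). For orientation, a minimal Go sketch of the name shapes being probed, reusing the service and namespace from this run (the loop itself is illustrative, not the suite's code):

package main

import "fmt"

func main() {
	svc, ns := "dns-test-service", "dns-5073"
	// Each shape is looked up over both UDP and TCP, from both the
	// "wheezy" and "jessie" probe containers in the test pod.
	for _, name := range []string{
		svc,                                     // partial name, resolved via the pod's search path
		svc + "." + ns,                          // <service>.<namespace>
		svc + "." + ns + ".svc",                 // <service>.<namespace>.svc
		"_http._tcp." + svc + "." + ns + ".svc", // SRV record for the named port
	} {
		fmt.Println(name)
	}
}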
• [SLOW TEST:43.036 seconds] [sig-network] DNS /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":303,"completed":268,"skipped":4459,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 13:13:57.730: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Aug 17 13:14:00.625: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Aug 17 13:14:02.646: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733266840, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733266840, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733266841, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733266840, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-85d57b96d6\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 17 13:14:04.813: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63733266840, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733266840, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733266841, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733266840, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-85d57b96d6\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 17 13:14:06.655: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733266840, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733266840, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733266841, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733266840, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-85d57b96d6\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 17 13:14:09.740: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 17 13:14:09.940: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 13:14:12.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-2650" for this suite. 
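The deployment the framework waits on above serves the conversion webhook itself: the API server sends it a ConversionReview holding v1 objects and expects the same objects back at the desired version. A self-contained sketch of that round trip, using trimmed local structs that mirror the apiextensions.k8s.io/v1 wire format as I understand it (field set reduced; a real webhook for a genuine v1-to-v2 change would also rewrite any moved fields):

package main

import "encoding/json"

// Trimmed mirrors of the apiextensions.k8s.io/v1 ConversionReview wire
// format; the canonical types live in k8s.io/apiextensions-apiserver.
type conversionRequest struct {
	UID               string            `json:"uid"`
	DesiredAPIVersion string            `json:"desiredAPIVersion"`
	Objects           []json.RawMessage `json:"objects"`
}

type conversionResponse struct {
	UID              string            `json:"uid"`
	ConvertedObjects []json.RawMessage `json:"convertedObjects"`
	Result           map[string]string `json:"result"`
}

// convert re-labels each object with the requested apiVersion; a real
// webhook would also translate any fields that differ between versions.
func convert(req conversionRequest) conversionResponse {
	out := make([]json.RawMessage, 0, len(req.Objects))
	for _, raw := range req.Objects {
		var obj map[string]any
		if err := json.Unmarshal(raw, &obj); err != nil {
			continue // a real handler would return a "Failed" Result instead
		}
		obj["apiVersion"] = req.DesiredAPIVersion
		b, _ := json.Marshal(obj)
		out = append(out, b)
	}
	return conversionResponse{
		UID:              req.UID,
		ConvertedObjects: out,
		Result:           map[string]string{"status": "Success"},
	}
}

func main() {}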
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:14.815 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":303,"completed":269,"skipped":4459,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 13:14:12.546: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Aug 17 13:14:12.705: INFO: Waiting up to 5m0s for pod "downward-api-984a4945-1c79-423d-bf7b-c5f061e1ed89" in namespace "downward-api-9953" to be "Succeeded or Failed" Aug 17 13:14:12.730: INFO: Pod "downward-api-984a4945-1c79-423d-bf7b-c5f061e1ed89": Phase="Pending", Reason="", readiness=false. Elapsed: 23.96333ms Aug 17 13:14:14.787: INFO: Pod "downward-api-984a4945-1c79-423d-bf7b-c5f061e1ed89": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081309828s Aug 17 13:14:16.793: INFO: Pod "downward-api-984a4945-1c79-423d-bf7b-c5f061e1ed89": Phase="Pending", Reason="", readiness=false. Elapsed: 4.087608788s Aug 17 13:14:18.800: INFO: Pod "downward-api-984a4945-1c79-423d-bf7b-c5f061e1ed89": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.093950002s STEP: Saw pod success Aug 17 13:14:18.800: INFO: Pod "downward-api-984a4945-1c79-423d-bf7b-c5f061e1ed89" satisfied condition "Succeeded or Failed" Aug 17 13:14:18.805: INFO: Trying to get logs from node latest-worker pod downward-api-984a4945-1c79-423d-bf7b-c5f061e1ed89 container dapi-container: STEP: delete the pod Aug 17 13:14:19.037: INFO: Waiting for pod downward-api-984a4945-1c79-423d-bf7b-c5f061e1ed89 to disappear Aug 17 13:14:19.047: INFO: Pod downward-api-984a4945-1c79-423d-bf7b-c5f061e1ed89 no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 13:14:19.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9953" for this suite. • [SLOW TEST:6.569 seconds] [sig-node] Downward API /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":303,"completed":270,"skipped":4462,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should find a service from listing all namespaces [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 13:14:19.118: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should find a service from listing all namespaces [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching services [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 13:14:19.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6103" for this suite. 
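Two specs wrap up above: the Downward API pod-UID check and the cross-namespace service listing. The former rests on a single downward-API field reference; a minimal sketch with client-go's corev1 types (the env var name is illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// The downward-API reference the spec relies on: the pod's own UID,
	// injected as an environment variable at container start.
	env := corev1.EnvVar{
		Name: "POD_UID",
		ValueFrom: &corev1.EnvVarSource{
			FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.uid"},
		},
	}
	fmt.Println(env.Name, "<-", env.ValueFrom.FieldRef.FieldPath)
}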
[AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 •{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":303,"completed":271,"skipped":4477,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 13:14:19.235: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 13:14:23.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-8523" for this suite. 
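The Kubelet spec above runs a busybox command that always fails and asserts the container reports a terminated reason. A sketch of the status fields such a check consults, assuming a client-go clientset (function and argument names illustrative):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printTerminatedReason inspects why a pod's first container exited,
// e.g. reason "Error" for a command that always fails.
func printTerminatedReason(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	if len(pod.Status.ContainerStatuses) == 0 {
		return fmt.Errorf("no container statuses yet")
	}
	st := pod.Status.ContainerStatuses[0]
	if t := st.State.Terminated; t != nil {
		fmt.Println("reason:", t.Reason, "exit:", t.ExitCode)
	} else if t := st.LastTerminationState.Terminated; t != nil {
		// Under restartPolicy Always/OnFailure the live state may already be
		// Waiting (CrashLoopBackOff); the previous exit is recorded here.
		fmt.Println("last reason:", t.Reason, "exit:", t.ExitCode)
	}
	return nil
}

func main() {}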
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":303,"completed":272,"skipped":4493,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] Events /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 13:14:23.558: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Aug 17 13:14:29.943: INFO: &Pod{ObjectMeta:{send-events-42d7500f-87ef-4e97-9b96-49df0e392887 events-5308 /api/v1/namespaces/events-5308/pods/send-events-42d7500f-87ef-4e97-9b96-49df0e392887 c3f9a0e4-d133-4302-bdbf-2e8399fab656 735212 0 2020-08-17 13:14:23 +0000 UTC map[name:foo time:853415283] map[] [] [] [{e2e.test Update v1 2020-08-17 13:14:23 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"p\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-17 13:14:28 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.133\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rjrzw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rjrzw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:p,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rjrzw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 13:14:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 13:14:28 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 13:14:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 13:14:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:10.244.2.133,StartTime:2020-08-17 13:14:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-17 13:14:26 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0,ContainerID:containerd://7c13e5224204f14d849fb9a764309afaebb253aa3851f18b390e11bc3e4af23e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.133,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Aug 17 13:14:31.955: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Aug 17 13:14:33.965: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 13:14:33.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-5308" for this suite. • [SLOW TEST:10.476 seconds] [k8s.io] [sig-node] Events /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":303,"completed":273,"skipped":4495,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 13:14:34.036: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] 
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-2416d409-aab9-451d-9fbe-6f5b9d4e3a60 STEP: Creating a pod to test consume secrets Aug 17 13:14:34.157: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8540a953-f396-4e9f-abb8-ca3fcccfe5a3" in namespace "projected-3799" to be "Succeeded or Failed" Aug 17 13:14:34.396: INFO: Pod "pod-projected-secrets-8540a953-f396-4e9f-abb8-ca3fcccfe5a3": Phase="Pending", Reason="", readiness=false. Elapsed: 239.087731ms Aug 17 13:14:36.407: INFO: Pod "pod-projected-secrets-8540a953-f396-4e9f-abb8-ca3fcccfe5a3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.249626913s Aug 17 13:14:38.415: INFO: Pod "pod-projected-secrets-8540a953-f396-4e9f-abb8-ca3fcccfe5a3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.257836047s Aug 17 13:14:40.482: INFO: Pod "pod-projected-secrets-8540a953-f396-4e9f-abb8-ca3fcccfe5a3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.324910079s STEP: Saw pod success Aug 17 13:14:40.482: INFO: Pod "pod-projected-secrets-8540a953-f396-4e9f-abb8-ca3fcccfe5a3" satisfied condition "Succeeded or Failed" Aug 17 13:14:40.666: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-8540a953-f396-4e9f-abb8-ca3fcccfe5a3 container projected-secret-volume-test: STEP: delete the pod Aug 17 13:14:40.924: INFO: Waiting for pod pod-projected-secrets-8540a953-f396-4e9f-abb8-ca3fcccfe5a3 to disappear Aug 17 13:14:41.101: INFO: Pod pod-projected-secrets-8540a953-f396-4e9f-abb8-ca3fcccfe5a3 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 13:14:41.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3799" for this suite. 
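Projected volumes, exercised above, wrap one or more sources (secrets, configmaps, downward API) behind a single mount. A minimal corev1 sketch of the volume the pod consumes, reusing the secret name from this run (the volume name is illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// A projected volume with a single secret source, as the spec mounts.
	vol := corev1.Volume{
		Name: "projected-secret",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{
							Name: "projected-secret-test-2416d409-aab9-451d-9fbe-6f5b9d4e3a60",
						},
					},
				}},
			},
		},
	}
	fmt.Println(vol.Name)
}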
• [SLOW TEST:7.217 seconds] [sig-storage] Projected secret /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":303,"completed":274,"skipped":4507,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 13:14:41.255: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-6491 [It] Should recreate evicted statefulset [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-6491 STEP: Creating statefulset with conflicting port in namespace statefulset-6491 STEP: Waiting until pod test-pod will start running in namespace statefulset-6491 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-6491 Aug 17 13:14:50.925: INFO: Observed stateful pod in namespace: statefulset-6491, name: ss-0, uid: ac8ab447-a1df-4427-8d77-79ad05bd849a, status phase: Failed. Waiting for statefulset controller to delete. Aug 17 13:14:50.930: INFO: Observed stateful pod in namespace: statefulset-6491, name: ss-0, uid: ac8ab447-a1df-4427-8d77-79ad05bd849a, status phase: Failed. Waiting for statefulset controller to delete. 
Aug 17 13:14:50.946: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-6491 STEP: Removing pod with conflicting port in namespace statefulset-6491 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-6491 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Aug 17 13:14:57.238: INFO: Deleting all statefulset in ns statefulset-6491 Aug 17 13:14:57.246: INFO: Scaling statefulset ss to 0 Aug 17 13:15:17.300: INFO: Waiting for statefulset status.replicas updated to 0 Aug 17 13:15:17.304: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 13:15:17.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6491" for this suite. • [SLOW TEST:36.078 seconds] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Should recreate evicted statefulset [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":303,"completed":275,"skipped":4548,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] LimitRange /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 13:15:17.334: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename limitrange STEP: Waiting for a default service account to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a LimitRange STEP: Setting up watch STEP: Submitting a LimitRange Aug 17 13:15:17.474: INFO: observed the limitRanges list STEP: Verifying LimitRange creation was observed STEP: Fetching the LimitRange to ensure it has proper values Aug 17 13:15:17.515: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Aug 17 13:15:17.516: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with no resource requirements STEP: Ensuring Pod has resource requirements applied from LimitRange Aug 17 13:15:17.552: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Aug 17 13:15:17.552: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with partial resource requirements STEP: Ensuring Pod has merged resource requirements applied from LimitRange Aug 17 13:15:17.608: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] Aug 17 13:15:17.608: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Failing to create a Pod with less than min resources STEP: Failing to create a Pod with more than max resources STEP: Updating a LimitRange STEP: Verifying LimitRange updating is effective STEP: Creating a Pod with less than former min resources STEP: Failing to create a Pod with more than max resources STEP: Deleting a LimitRange STEP: Verifying the LimitRange was deleted Aug 17 13:15:24.883: INFO: limitRange is already deleted STEP: Creating a Pod with more than former max resources [AfterEach] [sig-scheduling] LimitRange /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 13:15:24.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "limitrange-8448" for this suite. 
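Decoding the quantity dumps above: the LimitRange defaults requests to 100m CPU / 200Mi memory / 200Gi ephemeral-storage and limits to 500m / 500Mi / 500Gi, which is exactly what the pod created with no resource requirements receives. A sketch of an equivalent object with client-go types (the object name is illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	lr := corev1.LimitRange{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-limitrange"},
		Spec: corev1.LimitRangeSpec{
			Limits: []corev1.LimitRangeItem{{
				Type: corev1.LimitTypeContainer,
				// Applied as spec.resources.requests on containers that
				// declare none; matches the "Verifying requests" lines.
				DefaultRequest: corev1.ResourceList{
					corev1.ResourceCPU:              resource.MustParse("100m"),
					corev1.ResourceMemory:           resource.MustParse("200Mi"),
					corev1.ResourceEphemeralStorage: resource.MustParse("200Gi"),
				},
				// Applied as spec.resources.limits; matches "Verifying limits".
				Default: corev1.ResourceList{
					corev1.ResourceCPU:              resource.MustParse("500m"),
					corev1.ResourceMemory:           resource.MustParse("500Mi"),
					corev1.ResourceEphemeralStorage: resource.MustParse("500Gi"),
				},
			}},
		},
	}
	fmt.Println(lr.Name)
}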
• [SLOW TEST:7.937 seconds] [sig-scheduling] LimitRange /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":303,"completed":276,"skipped":4548,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 13:15:25.271: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check if v1 is in available api versions [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: validating api versions Aug 17 13:15:25.706: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config api-versions' Aug 17 13:15:27.356: INFO: stderr: "" Aug 17 13:15:27.356: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 13:15:27.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-101" for this suite. 
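The api-versions spec above shells out to kubectl and scans stdout for the core "v1" group/version, visible at the end of the stdout dump. The same check, sketched in Go (assumes kubectl on PATH with a configured kubeconfig):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "api-versions").Output()
	if err != nil {
		panic(err)
	}
	// kubectl prints one group/version per line; the legacy core API
	// appears as the bare line "v1".
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line == "v1" {
			fmt.Println("core v1 is served")
			return
		}
	}
	fmt.Println("core v1 missing")
}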
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":303,"completed":277,"skipped":4553,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 13:15:27.387: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-map-53d52538-affe-4f16-8715-c2ba95e9443d STEP: Creating a pod to test consume secrets Aug 17 13:15:27.700: INFO: Waiting up to 5m0s for pod "pod-secrets-f51f71aa-d7c9-43e1-b9ea-8b2e4de42fe0" in namespace "secrets-6069" to be "Succeeded or Failed" Aug 17 13:15:27.732: INFO: Pod "pod-secrets-f51f71aa-d7c9-43e1-b9ea-8b2e4de42fe0": Phase="Pending", Reason="", readiness=false. Elapsed: 31.714648ms Aug 17 13:15:29.743: INFO: Pod "pod-secrets-f51f71aa-d7c9-43e1-b9ea-8b2e4de42fe0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043290592s Aug 17 13:15:31.761: INFO: Pod "pod-secrets-f51f71aa-d7c9-43e1-b9ea-8b2e4de42fe0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.060868828s Aug 17 13:15:33.798: INFO: Pod "pod-secrets-f51f71aa-d7c9-43e1-b9ea-8b2e4de42fe0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.097886637s STEP: Saw pod success Aug 17 13:15:33.798: INFO: Pod "pod-secrets-f51f71aa-d7c9-43e1-b9ea-8b2e4de42fe0" satisfied condition "Succeeded or Failed" Aug 17 13:15:33.802: INFO: Trying to get logs from node latest-worker pod pod-secrets-f51f71aa-d7c9-43e1-b9ea-8b2e4de42fe0 container secret-volume-test: STEP: delete the pod Aug 17 13:15:33.871: INFO: Waiting for pod pod-secrets-f51f71aa-d7c9-43e1-b9ea-8b2e4de42fe0 to disappear Aug 17 13:15:34.061: INFO: Pod pod-secrets-f51f71aa-d7c9-43e1-b9ea-8b2e4de42fe0 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 13:15:34.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6069" for this suite. 
• [SLOW TEST:6.949 seconds] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":303,"completed":278,"skipped":4579,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 13:15:34.337: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Aug 17 13:15:44.630: INFO: Successfully updated pod "pod-update-activedeadlineseconds-bf96c4f9-84c0-4784-91c3-640384b23f78" Aug 17 13:15:44.631: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-bf96c4f9-84c0-4784-91c3-640384b23f78" in namespace "pods-6216" to be "terminated due to deadline exceeded" Aug 17 13:15:44.836: INFO: Pod "pod-update-activedeadlineseconds-bf96c4f9-84c0-4784-91c3-640384b23f78": Phase="Running", Reason="", readiness=true. Elapsed: 204.72765ms Aug 17 13:15:46.841: INFO: Pod "pod-update-activedeadlineseconds-bf96c4f9-84c0-4784-91c3-640384b23f78": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.209814278s Aug 17 13:15:46.841: INFO: Pod "pod-update-activedeadlineseconds-bf96c4f9-84c0-4784-91c3-640384b23f78" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 13:15:46.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6216" for this suite. 
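spec.activeDeadlineSeconds is one of the few pod-spec fields mutable on a running pod, and shrinking it is what drives the Phase="Failed", Reason="DeadlineExceeded" transition logged above. A sketch of that update as a strategic-merge patch via client-go (the function name and the one-second value are illustrative):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// shortenDeadline patches a running pod's activeDeadlineSeconds; the
// kubelet then fails the pod once the deadline passes.
func shortenDeadline(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	patch := []byte(`{"spec":{"activeDeadlineSeconds":1}}`)
	_, err := cs.CoreV1().Pods(ns).Patch(ctx, name,
		types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	return err
}

func main() {}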
• [SLOW TEST:12.517 seconds] [k8s.io] Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":303,"completed":279,"skipped":4581,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 13:15:46.857: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should be submitted and removed [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Aug 17 13:15:47.039: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 13:15:59.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7192" for this suite. 
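The submit-and-remove spec above leans on a watch opened before the pod is created, so both the creation and the graceful deletion arrive as events rather than being polled for. A client-go sketch of that pattern (function name and selector illustrative):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
)

// observePodLifecycle opens a watch, then reports each event until the
// pod's deletion is observed (mirroring "verifying pod deletion was observed").
func observePodLifecycle(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
	w, err := cs.CoreV1().Pods(ns).Watch(ctx, metav1.ListOptions{LabelSelector: selector})
	if err != nil {
		return err
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		fmt.Println("event:", ev.Type)
		if ev.Type == watch.Deleted {
			return nil
		}
	}
	return fmt.Errorf("watch closed before deletion was observed")
}

func main() {}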
• [SLOW TEST:12.837 seconds] [k8s.io] Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be submitted and removed [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":303,"completed":280,"skipped":4618,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 13:15:59.695: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Aug 17 13:16:00.688: INFO: Pod name wrapped-volume-race-e99eefaa-cd59-4be4-a4a7-fe713aefff64: Found 0 pods out of 5 Aug 17 13:16:05.713: INFO: Pod name wrapped-volume-race-e99eefaa-cd59-4be4-a4a7-fe713aefff64: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-e99eefaa-cd59-4be4-a4a7-fe713aefff64 in namespace emptydir-wrapper-3108, will wait for the garbage collector to delete the pods Aug 17 13:16:21.985: INFO: Deleting ReplicationController wrapped-volume-race-e99eefaa-cd59-4be4-a4a7-fe713aefff64 took: 38.930687ms Aug 17 13:16:22.486: INFO: Terminating ReplicationController wrapped-volume-race-e99eefaa-cd59-4be4-a4a7-fe713aefff64 pods took: 500.645302ms STEP: Creating RC which spawns configmap-volume pods Aug 17 13:16:40.683: INFO: Pod name wrapped-volume-race-3d5c8f94-6c42-4dac-b901-be6be575962b: Found 0 pods out of 5 Aug 17 13:16:45.702: INFO: Pod name wrapped-volume-race-3d5c8f94-6c42-4dac-b901-be6be575962b: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-3d5c8f94-6c42-4dac-b901-be6be575962b in namespace emptydir-wrapper-3108, will wait for the garbage collector to delete the pods Aug 17 13:17:02.685: INFO: Deleting ReplicationController wrapped-volume-race-3d5c8f94-6c42-4dac-b901-be6be575962b took: 48.564199ms Aug 17 13:17:03.185: INFO: Terminating ReplicationController wrapped-volume-race-3d5c8f94-6c42-4dac-b901-be6be575962b pods took: 500.620032ms STEP: Creating RC which spawns configmap-volume pods Aug 17 13:17:24.040: INFO: Pod name wrapped-volume-race-c1f0413d-0682-40f0-a1c4-84baecdcb1a1: Found 0 
pods out of 5 Aug 17 13:17:29.121: INFO: Pod name wrapped-volume-race-c1f0413d-0682-40f0-a1c4-84baecdcb1a1: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-c1f0413d-0682-40f0-a1c4-84baecdcb1a1 in namespace emptydir-wrapper-3108, will wait for the garbage collector to delete the pods Aug 17 13:17:51.239: INFO: Deleting ReplicationController wrapped-volume-race-c1f0413d-0682-40f0-a1c4-84baecdcb1a1 took: 8.581221ms Aug 17 13:17:51.740: INFO: Terminating ReplicationController wrapped-volume-race-c1f0413d-0682-40f0-a1c4-84baecdcb1a1 pods took: 500.46355ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 13:18:01.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-3108" for this suite. • [SLOW TEST:121.786 seconds] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":303,"completed":281,"skipped":4622,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 13:18:01.485: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 13:18:08.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-763" for this suite. 
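For context, the quota controller is expected to populate status.hard (mirroring spec.hard) and status.used shortly after the object is created; "promptly calculated" above means that sync completes within the test's polling window. A minimal ResourceQuota of the kind being created might look like this sketch (the name and the limits are illustrative assumptions):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: test-quota            # hypothetical name
spec:
  hard:
    pods: "5"
    services: "3"
    secrets: "10"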
• [SLOW TEST:7.318 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":303,"completed":282,"skipped":4647,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 13:18:08.806: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on tmpfs Aug 17 13:18:09.605: INFO: Waiting up to 5m0s for pod "pod-30762a5e-6a8a-4da6-824d-3fa412b83cb6" in namespace "emptydir-4638" to be "Succeeded or Failed" Aug 17 13:18:09.636: INFO: Pod "pod-30762a5e-6a8a-4da6-824d-3fa412b83cb6": Phase="Pending", Reason="", readiness=false. Elapsed: 30.451942ms Aug 17 13:18:12.154: INFO: Pod "pod-30762a5e-6a8a-4da6-824d-3fa412b83cb6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.548445328s Aug 17 13:18:14.181: INFO: Pod "pod-30762a5e-6a8a-4da6-824d-3fa412b83cb6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.575057233s Aug 17 13:18:16.202: INFO: Pod "pod-30762a5e-6a8a-4da6-824d-3fa412b83cb6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.596297946s Aug 17 13:18:18.778: INFO: Pod "pod-30762a5e-6a8a-4da6-824d-3fa412b83cb6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 9.172209681s STEP: Saw pod success Aug 17 13:18:18.778: INFO: Pod "pod-30762a5e-6a8a-4da6-824d-3fa412b83cb6" satisfied condition "Succeeded or Failed" Aug 17 13:18:18.826: INFO: Trying to get logs from node latest-worker2 pod pod-30762a5e-6a8a-4da6-824d-3fa412b83cb6 container test-container: STEP: delete the pod Aug 17 13:18:18.917: INFO: Waiting for pod pod-30762a5e-6a8a-4da6-824d-3fa412b83cb6 to disappear Aug 17 13:18:18.922: INFO: Pod pod-30762a5e-6a8a-4da6-824d-3fa412b83cb6 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 13:18:18.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4638" for this suite. • [SLOW TEST:10.144 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":283,"skipped":4653,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 13:18:18.952: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-6c5ff6d6-cf3d-4537-89b9-d93d2f75283e STEP: Creating a pod to test consume secrets Aug 17 13:18:19.709: INFO: Waiting up to 5m0s for pod "pod-secrets-cb7cd52a-7b06-4b82-8cf9-6a7ce673ac66" in namespace "secrets-9853" to be "Succeeded or Failed" Aug 17 13:18:19.872: INFO: Pod "pod-secrets-cb7cd52a-7b06-4b82-8cf9-6a7ce673ac66": Phase="Pending", Reason="", readiness=false. Elapsed: 161.928619ms Aug 17 13:18:21.899: INFO: Pod "pod-secrets-cb7cd52a-7b06-4b82-8cf9-6a7ce673ac66": Phase="Pending", Reason="", readiness=false. Elapsed: 2.189640315s Aug 17 13:18:24.289: INFO: Pod "pod-secrets-cb7cd52a-7b06-4b82-8cf9-6a7ce673ac66": Phase="Pending", Reason="", readiness=false. Elapsed: 4.578780888s Aug 17 13:18:26.364: INFO: Pod "pod-secrets-cb7cd52a-7b06-4b82-8cf9-6a7ce673ac66": Phase="Running", Reason="", readiness=true. 
Elapsed: 6.65382912s Aug 17 13:18:28.471: INFO: Pod "pod-secrets-cb7cd52a-7b06-4b82-8cf9-6a7ce673ac66": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.761643633s STEP: Saw pod success Aug 17 13:18:28.472: INFO: Pod "pod-secrets-cb7cd52a-7b06-4b82-8cf9-6a7ce673ac66" satisfied condition "Succeeded or Failed" Aug 17 13:18:28.499: INFO: Trying to get logs from node latest-worker pod pod-secrets-cb7cd52a-7b06-4b82-8cf9-6a7ce673ac66 container secret-env-test: STEP: delete the pod Aug 17 13:18:28.639: INFO: Waiting for pod pod-secrets-cb7cd52a-7b06-4b82-8cf9-6a7ce673ac66 to disappear Aug 17 13:18:28.683: INFO: Pod pod-secrets-cb7cd52a-7b06-4b82-8cf9-6a7ce673ac66 no longer exists [AfterEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 13:18:28.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9853" for this suite. • [SLOW TEST:9.819 seconds] [sig-api-machinery] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:36 should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":303,"completed":284,"skipped":4673,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 13:18:28.772: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should create and stop a working application [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating all guestbook components Aug 17 13:18:29.276: INFO:
apiVersion: v1
kind: Service
metadata:
  name: agnhost-replica
  labels:
    app: agnhost
    role: replica
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: replica
    tier: backend
Aug 17 13:18:29.277: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5093' Aug 17 13:18:32.936: INFO: stderr: "" Aug 17 13:18:32.936: INFO: stdout: "service/agnhost-replica created\n" Aug 17 13:18:32.937: INFO:
apiVersion: v1
kind: Service
metadata:
  name: agnhost-primary
  labels:
    app: agnhost
    role: primary
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: primary
    tier: backend
Aug 17 13:18:32.938: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5093' Aug 17 13:18:37.083: INFO: stderr: "" Aug 17 13:18:37.083: INFO: stdout: "service/agnhost-primary created\n" Aug 17 13:18:37.085: INFO:
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
Aug 17 13:18:37.085: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5093' Aug 17 13:18:40.725: INFO: stderr: "" Aug 17 13:18:40.725: INFO: stdout: "service/frontend created\n" Aug 17 13:18:40.726: INFO:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: k8s.gcr.io/e2e-test-images/agnhost:2.20
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
Aug 17 13:18:40.726: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5093' Aug 17 13:18:43.435: INFO: stderr: "" Aug 17 13:18:43.435: INFO: stdout: "deployment.apps/frontend created\n" Aug 17 13:18:43.437: INFO:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-primary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: primary
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: primary
        tier: backend
    spec:
      containers:
      - name: primary
        image: k8s.gcr.io/e2e-test-images/agnhost:2.20
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Aug 17 13:18:43.438: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5093' Aug 17 13:18:47.289: INFO: stderr: "" Aug 17 13:18:47.289: INFO: stdout: "deployment.apps/agnhost-primary created\n" Aug 17 13:18:47.291: INFO:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-replica
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: replica
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: replica
        tier: backend
    spec:
      containers:
      - name: replica
        image: k8s.gcr.io/e2e-test-images/agnhost:2.20
        args: [ "guestbook", "--replicaof", "agnhost-primary", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Aug 17 13:18:47.291: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5093' Aug 17 13:18:50.823: INFO: stderr: "" Aug 17 13:18:50.823: INFO: stdout: "deployment.apps/agnhost-replica created\n" STEP: validating guestbook app Aug 17 13:18:50.824: INFO: Waiting for all frontend pods to be Running. Aug 17 13:18:55.876: INFO: Waiting for frontend to serve content. Aug 17 13:18:57.136: INFO: Failed to get response from guestbook.
err: the server responded with the status code 417 but did not return more information (get services frontend), response: Aug 17 13:19:02.148: INFO: Trying to add a new entry to the guestbook. Aug 17 13:19:02.159: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Aug 17 13:19:02.169: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5093' Aug 17 13:19:03.748: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 17 13:19:03.748: INFO: stdout: "service \"agnhost-replica\" force deleted\n" STEP: using delete to clean up resources Aug 17 13:19:03.749: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5093' Aug 17 13:19:06.032: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 17 13:19:06.032: INFO: stdout: "service \"agnhost-primary\" force deleted\n" STEP: using delete to clean up resources Aug 17 13:19:06.033: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5093' Aug 17 13:19:08.167: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 17 13:19:08.168: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Aug 17 13:19:08.169: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5093' Aug 17 13:19:09.581: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 17 13:19:09.582: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Aug 17 13:19:09.583: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5093' Aug 17 13:19:11.329: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 17 13:19:11.330: INFO: stdout: "deployment.apps \"agnhost-primary\" force deleted\n" STEP: using delete to clean up resources Aug 17 13:19:11.331: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5093' Aug 17 13:19:12.938: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Aug 17 13:19:12.938: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 13:19:12.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5093" for this suite. • [SLOW TEST:44.628 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:351 should create and stop a working application [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":303,"completed":285,"skipped":4681,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 13:19:13.401: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 17 13:19:13.736: INFO: Waiting up to 5m0s for pod "downwardapi-volume-867b0bfa-18f4-4851-8e3e-f57770a6df7e" in namespace "projected-9639" to be "Succeeded or Failed" Aug 17 13:19:13.746: INFO: Pod "downwardapi-volume-867b0bfa-18f4-4851-8e3e-f57770a6df7e": Phase="Pending", Reason="", readiness=false. Elapsed: 10.031382ms Aug 17 13:19:15.957: INFO: Pod "downwardapi-volume-867b0bfa-18f4-4851-8e3e-f57770a6df7e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.221005804s Aug 17 13:19:17.965: INFO: Pod "downwardapi-volume-867b0bfa-18f4-4851-8e3e-f57770a6df7e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.228635313s Aug 17 13:19:19.998: INFO: Pod "downwardapi-volume-867b0bfa-18f4-4851-8e3e-f57770a6df7e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.262219083s STEP: Saw pod success Aug 17 13:19:19.999: INFO: Pod "downwardapi-volume-867b0bfa-18f4-4851-8e3e-f57770a6df7e" satisfied condition "Succeeded or Failed" Aug 17 13:19:20.004: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-867b0bfa-18f4-4851-8e3e-f57770a6df7e container client-container: STEP: delete the pod Aug 17 13:19:20.399: INFO: Waiting for pod downwardapi-volume-867b0bfa-18f4-4851-8e3e-f57770a6df7e to disappear Aug 17 13:19:20.410: INFO: Pod downwardapi-volume-867b0bfa-18f4-4851-8e3e-f57770a6df7e no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 13:19:20.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9639" for this suite. • [SLOW TEST:7.021 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":303,"completed":286,"skipped":4688,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 13:19:20.423: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-3d8e1036-3155-4bb1-bb2f-5fad9371df14 STEP: Creating a pod to test consume secrets Aug 17 13:19:20.695: INFO: Waiting up to 5m0s for pod "pod-secrets-c0998c27-7781-4561-86a3-ba3edcf7ef1e" in namespace "secrets-5419" to be "Succeeded or Failed" Aug 17 13:19:20.891: INFO: Pod "pod-secrets-c0998c27-7781-4561-86a3-ba3edcf7ef1e": Phase="Pending", Reason="", readiness=false. Elapsed: 194.935057ms Aug 17 13:19:22.898: INFO: Pod "pod-secrets-c0998c27-7781-4561-86a3-ba3edcf7ef1e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.202483455s Aug 17 13:19:24.905: INFO: Pod "pod-secrets-c0998c27-7781-4561-86a3-ba3edcf7ef1e": Phase="Running", Reason="", readiness=true. Elapsed: 4.209365686s Aug 17 13:19:26.945: INFO: Pod "pod-secrets-c0998c27-7781-4561-86a3-ba3edcf7ef1e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.24955548s STEP: Saw pod success Aug 17 13:19:26.945: INFO: Pod "pod-secrets-c0998c27-7781-4561-86a3-ba3edcf7ef1e" satisfied condition "Succeeded or Failed" Aug 17 13:19:26.951: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-c0998c27-7781-4561-86a3-ba3edcf7ef1e container secret-volume-test: STEP: delete the pod Aug 17 13:19:26.975: INFO: Waiting for pod pod-secrets-c0998c27-7781-4561-86a3-ba3edcf7ef1e to disappear Aug 17 13:19:27.013: INFO: Pod pod-secrets-c0998c27-7781-4561-86a3-ba3edcf7ef1e no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 13:19:27.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5419" for this suite. STEP: Destroying namespace "secret-namespace-3708" for this suite. • [SLOW TEST:6.613 seconds] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":303,"completed":287,"skipped":4698,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 13:19:27.038: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-18319a71-bc6f-4437-865a-9dc4256723f7 STEP: Creating a pod to test consume configMaps Aug 17 13:19:27.168: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8c30113e-f5c1-422a-8260-9e8d174a4afa" in namespace "projected-4022" to be "Succeeded or 
Failed" Aug 17 13:19:27.215: INFO: Pod "pod-projected-configmaps-8c30113e-f5c1-422a-8260-9e8d174a4afa": Phase="Pending", Reason="", readiness=false. Elapsed: 46.655111ms Aug 17 13:19:29.223: INFO: Pod "pod-projected-configmaps-8c30113e-f5c1-422a-8260-9e8d174a4afa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054897519s Aug 17 13:19:31.251: INFO: Pod "pod-projected-configmaps-8c30113e-f5c1-422a-8260-9e8d174a4afa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.082182019s STEP: Saw pod success Aug 17 13:19:31.251: INFO: Pod "pod-projected-configmaps-8c30113e-f5c1-422a-8260-9e8d174a4afa" satisfied condition "Succeeded or Failed" Aug 17 13:19:31.258: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-8c30113e-f5c1-422a-8260-9e8d174a4afa container projected-configmap-volume-test: STEP: delete the pod Aug 17 13:19:31.288: INFO: Waiting for pod pod-projected-configmaps-8c30113e-f5c1-422a-8260-9e8d174a4afa to disappear Aug 17 13:19:31.566: INFO: Pod pod-projected-configmaps-8c30113e-f5c1-422a-8260-9e8d174a4afa no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 13:19:31.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4022" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":303,"completed":288,"skipped":4706,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 13:19:31.618: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test env composition Aug 17 13:19:31.800: INFO: Waiting up to 5m0s for pod "var-expansion-b900943f-9654-4e3f-a3e6-a612a90fca7c" in namespace "var-expansion-5316" to be "Succeeded or Failed" Aug 17 13:19:31.855: INFO: Pod "var-expansion-b900943f-9654-4e3f-a3e6-a612a90fca7c": Phase="Pending", Reason="", readiness=false. Elapsed: 54.18429ms Aug 17 13:19:33.932: INFO: Pod "var-expansion-b900943f-9654-4e3f-a3e6-a612a90fca7c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.131321505s Aug 17 13:19:36.047: INFO: Pod "var-expansion-b900943f-9654-4e3f-a3e6-a612a90fca7c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.245986575s Aug 17 13:19:38.054: INFO: Pod "var-expansion-b900943f-9654-4e3f-a3e6-a612a90fca7c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.253593645s STEP: Saw pod success Aug 17 13:19:38.054: INFO: Pod "var-expansion-b900943f-9654-4e3f-a3e6-a612a90fca7c" satisfied condition "Succeeded or Failed" Aug 17 13:19:38.064: INFO: Trying to get logs from node latest-worker2 pod var-expansion-b900943f-9654-4e3f-a3e6-a612a90fca7c container dapi-container: STEP: delete the pod Aug 17 13:19:38.099: INFO: Waiting for pod var-expansion-b900943f-9654-4e3f-a3e6-a612a90fca7c to disappear Aug 17 13:19:38.117: INFO: Pod var-expansion-b900943f-9654-4e3f-a3e6-a612a90fca7c no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 13:19:38.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-5316" for this suite. • [SLOW TEST:6.549 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":303,"completed":289,"skipped":4713,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 13:19:38.169: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-5b085744-f5ef-4829-9df8-f8227c339af0 STEP: Creating a pod to test consume secrets Aug 17 13:19:38.301: INFO: Waiting up to 5m0s for pod "pod-secrets-880ec471-3dfd-4460-8d1a-c094cdf545f3" in namespace "secrets-3125" to be "Succeeded or Failed" Aug 17 13:19:38.323: INFO: Pod "pod-secrets-880ec471-3dfd-4460-8d1a-c094cdf545f3": Phase="Pending", Reason="", readiness=false. Elapsed: 21.688228ms Aug 17 13:19:40.422: INFO: Pod "pod-secrets-880ec471-3dfd-4460-8d1a-c094cdf545f3": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.120545094s Aug 17 13:19:42.482: INFO: Pod "pod-secrets-880ec471-3dfd-4460-8d1a-c094cdf545f3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.18097449s STEP: Saw pod success Aug 17 13:19:42.482: INFO: Pod "pod-secrets-880ec471-3dfd-4460-8d1a-c094cdf545f3" satisfied condition "Succeeded or Failed" Aug 17 13:19:42.502: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-880ec471-3dfd-4460-8d1a-c094cdf545f3 container secret-volume-test: STEP: delete the pod Aug 17 13:19:42.727: INFO: Waiting for pod pod-secrets-880ec471-3dfd-4460-8d1a-c094cdf545f3 to disappear Aug 17 13:19:42.753: INFO: Pod pod-secrets-880ec471-3dfd-4460-8d1a-c094cdf545f3 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 13:19:42.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3125" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":303,"completed":290,"skipped":4723,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 13:19:42.775: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 17 13:19:46.222: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 17 13:19:48.238: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733267186, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733267186, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733267186, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733267186, loc:(*time.Location)(0x6e4f160)}}, 
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 17 13:19:50.288: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733267186, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733267186, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733267186, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733267186, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 17 13:19:54.366: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733267186, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733267186, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733267186, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733267186, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 17 13:19:57.302: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 13:19:57.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5904" for this suite. STEP: Destroying namespace "webhook-5904-markers" for this suite. 
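For context, the objects being listed and then deleted as a collection above are ValidatingWebhookConfigurations that reject non-compliant ConfigMaps; once the collection is deleted, the second ConfigMap creation is expected to succeed. A sketch of such a configuration, pointing at the e2e-test-webhook service deployed earlier, might look like the following — the webhook name, handler path, and rule details are illustrative assumptions:

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: e2e-test-validating-webhook    # hypothetical name
webhooks:
- name: deny-configmap.webhook.example.com   # hypothetical webhook name
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["configmaps"]
  clientConfig:
    service:
      namespace: webhook-5904   # namespace used by this test run
      name: e2e-test-webhook    # service name verified in the log above
      path: /always-deny        # hypothetical handler path; caBundle omitted in this sketch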
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:15.278 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":303,"completed":291,"skipped":4728,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 13:19:58.057: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting the auto-created API token Aug 17 13:19:58.644: INFO: created pod pod-service-account-defaultsa Aug 17 13:19:58.644: INFO: pod pod-service-account-defaultsa service account token volume mount: true Aug 17 13:19:58.892: INFO: created pod pod-service-account-mountsa Aug 17 13:19:58.892: INFO: pod pod-service-account-mountsa service account token volume mount: true Aug 17 13:19:59.143: INFO: created pod pod-service-account-nomountsa Aug 17 13:19:59.143: INFO: pod pod-service-account-nomountsa service account token volume mount: false Aug 17 13:19:59.219: INFO: created pod pod-service-account-defaultsa-mountspec Aug 17 13:19:59.219: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Aug 17 13:19:59.288: INFO: created pod pod-service-account-mountsa-mountspec Aug 17 13:19:59.288: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Aug 17 13:19:59.322: INFO: created pod pod-service-account-nomountsa-mountspec Aug 17 13:19:59.322: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Aug 17 13:19:59.714: INFO: created pod pod-service-account-defaultsa-nomountspec Aug 17 13:19:59.714: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Aug 17 13:19:59.778: INFO: created pod pod-service-account-mountsa-nomountspec Aug 17 13:19:59.779: INFO: pod 
pod-service-account-mountsa-nomountspec service account token volume mount: false Aug 17 13:20:00.017: INFO: created pod pod-service-account-nomountsa-nomountspec Aug 17 13:20:00.017: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 17 13:20:00.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-8803" for this suite. •{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":303,"completed":292,"skipped":4788,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 17 13:20:00.261: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 17 13:20:05.108: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 17 13:20:07.916: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733267205, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733267205, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733267205, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733267205, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 17 13:20:10.127: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 17 13:20:00.261: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 17 13:20:05.108: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 17 13:20:07.916: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733267205, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733267205, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733267205, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733267205, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 17 13:20:10.127: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733267205, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733267205, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733267205, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733267205, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 17 13:20:12.247: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733267205, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733267205, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733267205, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733267205, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 17 13:20:13.948: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733267205, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733267205, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733267205, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733267205, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 17 13:20:16.342: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733267205, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733267205, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733267205, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733267205, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 17 13:20:18.085: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733267205, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733267205, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733267205, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733267205, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 17 13:20:22.241: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Aug 17 13:20:22.323: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-2174-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 17 13:20:23.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8511" for this suite.
STEP: Destroying namespace "webhook-8511-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:24.896 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":303,"completed":293,"skipped":4800,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSSSSS
------------------------------
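The mutating-webhook test above deploys a sample webhook server and registers it for a custom resource. The heart of such a server, sketched with admission/v1 types (endpoint path, port, and patch contents are illustrative; the suite's real server is the sample-webhook-deployment seen in the log, and it serves TLS with the cert from "Setting up server cert"):

    package main

    import (
        "encoding/json"
        "net/http"

        admissionv1 "k8s.io/api/admission/v1"
    )

    // mutate decodes an AdmissionReview, allows the request, and attaches a
    // JSON patch; the API server applies the patch to the incoming object.
    // Error and nil checks are trimmed to keep the sketch short.
    func mutate(w http.ResponseWriter, r *http.Request) {
        var review admissionv1.AdmissionReview
        if err := json.NewDecoder(r.Body).Decode(&review); err != nil {
            http.Error(w, err.Error(), http.StatusBadRequest)
            return
        }
        // Illustrative patch: add a field to the custom resource's data.
        patchType := admissionv1.PatchTypeJSONPatch
        review.Response = &admissionv1.AdmissionResponse{
            UID:       review.Request.UID,
            Allowed:   true,
            Patch:     []byte(`[{"op":"add","path":"/data/mutated","value":"true"}]`),
            PatchType: &patchType,
        }
        w.Header().Set("Content-Type", "application/json")
        json.NewEncoder(w).Encode(review)
    }

    func main() {
        http.HandleFunc("/mutating-custom-resource", mutate)
        // Plain HTTP only to keep the sketch self-contained; a real webhook
        // must serve TLS trusted by the MutatingWebhookConfiguration's caBundle.
        http.ListenAndServe(":8444", nil)
    }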
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 17 13:20:25.158: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 17 13:20:30.381: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 17 13:20:33.638: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733267230, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733267230, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733267230, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733267230, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 17 13:20:37.604: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny attaching pod [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod
STEP: 'kubectl attach' the pod, should be denied by the webhook
Aug 17 13:20:41.723: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config attach --namespace=webhook-2949 to-be-attached-pod -i -c=container1'
Aug 17 13:20:43.271: INFO: rc: 1
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 17 13:20:43.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2949" for this suite.
STEP: Destroying namespace "webhook-2949-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:18.415 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":303,"completed":294,"skipped":4806,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSSS
------------------------------
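The denial seen above ("rc: 1" from kubectl attach) comes from an admission response with Allowed set to false for the pods/attach subresource. A small illustrative sketch of that decision (the function name, message, and demo UID are not the suite's; the real server is the sample webhook deployed in BeforeEach):

    package main

    import (
        "fmt"

        admissionv1 "k8s.io/api/admission/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // denyAttach is the whole decision: the webhook is registered for the
    // pods/attach subresource and refuses every request it sees.
    func denyAttach(review *admissionv1.AdmissionReview) {
        review.Response = &admissionv1.AdmissionResponse{
            UID:     review.Request.UID,
            Allowed: false, // kubectl attach then exits non-zero, hence "rc: 1" above
            Result:  &metav1.Status{Message: "attaching to pods is denied by this webhook"},
        }
    }

    func main() {
        review := &admissionv1.AdmissionReview{Request: &admissionv1.AdmissionRequest{UID: "demo"}}
        denyAttach(review)
        fmt.Println("allowed:", review.Response.Allowed)
    }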
[sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 17 13:20:43.574: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name projected-secret-test-a27f8c8b-6283-4b9e-adab-063813222bc1
STEP: Creating a pod to test consume secrets
Aug 17 13:20:43.922: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-496e7d48-e350-4540-bfad-9a5953dee60a" in namespace "projected-7592" to be "Succeeded or Failed"
Aug 17 13:20:44.301: INFO: Pod "pod-projected-secrets-496e7d48-e350-4540-bfad-9a5953dee60a": Phase="Pending", Reason="", readiness=false. Elapsed: 377.871944ms
Aug 17 13:20:46.308: INFO: Pod "pod-projected-secrets-496e7d48-e350-4540-bfad-9a5953dee60a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.385134663s
Aug 17 13:20:48.315: INFO: Pod "pod-projected-secrets-496e7d48-e350-4540-bfad-9a5953dee60a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.392449093s
Aug 17 13:20:50.323: INFO: Pod "pod-projected-secrets-496e7d48-e350-4540-bfad-9a5953dee60a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.40040836s
STEP: Saw pod success
Aug 17 13:20:50.323: INFO: Pod "pod-projected-secrets-496e7d48-e350-4540-bfad-9a5953dee60a" satisfied condition "Succeeded or Failed"
Aug 17 13:20:50.329: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-496e7d48-e350-4540-bfad-9a5953dee60a container secret-volume-test:
STEP: delete the pod
Aug 17 13:20:50.367: INFO: Waiting for pod pod-projected-secrets-496e7d48-e350-4540-bfad-9a5953dee60a to disappear
Aug 17 13:20:50.378: INFO: Pod pod-projected-secrets-496e7d48-e350-4540-bfad-9a5953dee60a no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 17 13:20:50.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7592" for this suite.
• [SLOW TEST:6.819 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":303,"completed":295,"skipped":4810,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSSS
------------------------------
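This test consumes one secret through two projected volumes in a single pod. Approximately the pod it builds, as a sketch (secret name, image, command, and mount paths are illustrative, not the suite's exact fixtures):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // projectedSecretVolume wraps one secret as a projected volume source.
    func projectedSecretVolume(name, secretName string) corev1.Volume {
        return corev1.Volume{
            Name: name,
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{{
                        Secret: &corev1.SecretProjection{
                            LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
                        },
                    }},
                },
            },
        }
    }

    func main() {
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "secret-volume-test",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "cat /etc/projected-secret-volume-1/data-1 /etc/projected-secret-volume-2/data-1"},
                    VolumeMounts: []corev1.VolumeMount{
                        {Name: "projected-secret-volume-1", MountPath: "/etc/projected-secret-volume-1", ReadOnly: true},
                        {Name: "projected-secret-volume-2", MountPath: "/etc/projected-secret-volume-2", ReadOnly: true},
                    },
                }},
                // The same secret, consumed twice through separate volumes.
                Volumes: []corev1.Volume{
                    projectedSecretVolume("projected-secret-volume-1", "projected-secret-test"),
                    projectedSecretVolume("projected-secret-volume-2", "projected-secret-test"),
                },
            },
        }
        fmt.Println(pod.Name, "has", len(pod.Spec.Volumes), "projected volumes")
    }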
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 17 13:20:50.395: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Aug 17 13:20:50.482: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
Aug 17 13:21:12.262: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8442 create -f -'
Aug 17 13:21:17.814: INFO: stderr: ""
Aug 17 13:21:17.814: INFO: stdout: "e2e-test-crd-publish-openapi-1058-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Aug 17 13:21:17.815: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8442 delete e2e-test-crd-publish-openapi-1058-crds test-foo'
Aug 17 13:21:19.292: INFO: stderr: ""
Aug 17 13:21:19.292: INFO: stdout: "e2e-test-crd-publish-openapi-1058-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
Aug 17 13:21:19.293: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8442 apply -f -'
Aug 17 13:21:23.274: INFO: stderr: ""
Aug 17 13:21:23.274: INFO: stdout: "e2e-test-crd-publish-openapi-1058-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Aug 17 13:21:23.274: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8442 delete e2e-test-crd-publish-openapi-1058-crds test-foo'
Aug 17 13:21:25.426: INFO: stderr: ""
Aug 17 13:21:25.426: INFO: stdout: "e2e-test-crd-publish-openapi-1058-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
Aug 17 13:21:25.426: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8442 create -f -'
Aug 17 13:21:28.684: INFO: rc: 1
Aug 17 13:21:28.685: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8442 apply -f -'
Aug 17 13:21:31.120: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request without required properties
Aug 17 13:21:31.125: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8442 create -f -'
Aug 17 13:21:34.706: INFO: rc: 1
Aug 17 13:21:34.707: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8442 apply -f -'
Aug 17 13:21:37.869: INFO: rc: 1
STEP: kubectl explain works to explain CR properties
Aug 17 13:21:37.870: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1058-crds'
Aug 17 13:21:41.540: INFO: stderr: ""
Aug 17 13:21:41.540: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1058-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t<string>\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t<string>\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t<Object>\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<Object>\n Specification of Foo\n\n status\t<Object>\n Status of Foo\n\n"
STEP: kubectl explain works to explain CR properties recursively
Aug 17 13:21:41.545: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1058-crds.metadata'
Aug 17 13:21:44.075: INFO: stderr: ""
Aug 17 13:21:44.075: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1058-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata <Object>\n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t<map[string]string>\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t<string>\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t<string>\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC.\n\n Populated by the system. Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t<integer>\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t<string>\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested.\n\n Populated by the system when a graceful deletion is requested. Read-only.\n More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t<string>\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server.\n\n If this field is specified and the generated name exists, the server will\n NOT return a 409 - instead, it will either return 201 Created or 500 with\n Reason ServerTimeout indicating a unique name could not be found in the\n time allotted, and the client should retry (optionally after the time\n indicated in the Retry-After header).\n\n Applied only if Name is not specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t<integer>\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t<map[string]string>\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t<string>\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t<string>\n Namespace defines the space within which each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty.\n\n Must be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t<string>\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources.\n\n Populated by the system. Read-only. Value must be treated as opaque by\n clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t<string>\n SelfLink is a URL representing this object. Populated by the system.\n Read-only.\n\n DEPRECATED Kubernetes will stop propagating this field in 1.20 release and\n the field is planned to be removed in 1.21 release.\n\n uid\t<string>\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations.\n\n Populated by the system. Read-only. More info:\n http://kubernetes.io/docs/user-guide/identifiers#uids\n\n"
Aug 17 13:21:44.082: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1058-crds.spec'
Aug 17 13:21:47.272: INFO: stderr: ""
Aug 17 13:21:47.272: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1058-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n"
Aug 17 13:21:47.273: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1058-crds.spec.bars'
Aug 17 13:21:50.860: INFO: stderr: ""
Aug 17 13:21:50.860: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1058-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t<string>\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t<string> -required-\n Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Aug 17 13:21:50.863: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1058-crds.spec.bars2'
Aug 17 13:21:53.197: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 17 13:22:14.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-8442" for this suite.
• [SLOW TEST:84.200 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":303,"completed":296,"skipped":4814,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
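The CRD this test publishes carries a structural OpenAPI v3 schema, which is what enables the client-side validation and the kubectl explain output quoted above. A sketch of such a schema with apiextensions/v1 types (group and kind names simplified; the suite generates random e2e-test-crd-publish-openapi-* names):

    package main

    import (
        "fmt"

        apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        schema := &apiextensionsv1.JSONSchemaProps{
            Type:        "object",
            Description: "Foo CRD for Testing",
            Properties: map[string]apiextensionsv1.JSONSchemaProps{
                "spec": {
                    Type:        "object",
                    Description: "Specification of Foo",
                    Properties: map[string]apiextensionsv1.JSONSchemaProps{
                        "bars": {
                            Type:        "array",
                            Description: "List of Bars and their specs.",
                            Items: &apiextensionsv1.JSONSchemaPropsOrArray{
                                Schema: &apiextensionsv1.JSONSchemaProps{
                                    Type:     "object",
                                    Required: []string{"name"}, // drives "rejects request without required properties"
                                    Properties: map[string]apiextensionsv1.JSONSchemaProps{
                                        "name": {Type: "string", Description: "Name of Bar."},
                                        "age":  {Type: "string", Description: "Age of Bar."},
                                        "bazs": {Type: "array", Items: &apiextensionsv1.JSONSchemaPropsOrArray{
                                            Schema: &apiextensionsv1.JSONSchemaProps{Type: "string"},
                                        }},
                                    },
                                },
                            },
                        },
                    },
                },
                "status": {Type: "object", Description: "Status of Foo"},
            },
        }
        crd := &apiextensionsv1.CustomResourceDefinition{
            ObjectMeta: metav1.ObjectMeta{Name: "foos.example.com"},
            Spec: apiextensionsv1.CustomResourceDefinitionSpec{
                Group: "example.com",
                Scope: apiextensionsv1.NamespaceScoped,
                Names: apiextensionsv1.CustomResourceDefinitionNames{
                    Plural: "foos", Singular: "foo", Kind: "Foo", ListKind: "FooList",
                },
                Versions: []apiextensionsv1.CustomResourceDefinitionVersion{{
                    Name: "v1", Served: true, Storage: true,
                    // Publishing this schema is what feeds kubectl explain.
                    Schema: &apiextensionsv1.CustomResourceValidation{OpenAPIV3Schema: schema},
                }},
            },
        }
        fmt.Println("would create CRD:", crd.Name)
    }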
[sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 17 13:22:14.599: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0666 on node default medium
Aug 17 13:22:14.657: INFO: Waiting up to 5m0s for pod "pod-963c3e20-166f-4273-9a62-21f2c872a8d3" in namespace "emptydir-9774" to be "Succeeded or Failed"
Aug 17 13:22:14.701: INFO: Pod "pod-963c3e20-166f-4273-9a62-21f2c872a8d3": Phase="Pending", Reason="", readiness=false. Elapsed: 43.35336ms
Aug 17 13:22:16.894: INFO: Pod "pod-963c3e20-166f-4273-9a62-21f2c872a8d3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.236502887s
Aug 17 13:22:19.085: INFO: Pod "pod-963c3e20-166f-4273-9a62-21f2c872a8d3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.427958912s
Aug 17 13:22:21.092: INFO: Pod "pod-963c3e20-166f-4273-9a62-21f2c872a8d3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.434468739s
STEP: Saw pod success
Aug 17 13:22:21.092: INFO: Pod "pod-963c3e20-166f-4273-9a62-21f2c872a8d3" satisfied condition "Succeeded or Failed"
Aug 17 13:22:21.199: INFO: Trying to get logs from node latest-worker pod pod-963c3e20-166f-4273-9a62-21f2c872a8d3 container test-container:
STEP: delete the pod
Aug 17 13:22:21.291: INFO: Waiting for pod pod-963c3e20-166f-4273-9a62-21f2c872a8d3 to disappear
Aug 17 13:22:21.317: INFO: Pod pod-963c3e20-166f-4273-9a62-21f2c872a8d3 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 17 13:22:21.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9774" for this suite.
• [SLOW TEST:6.730 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":297,"skipped":4872,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
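The emptyDir case above boils down to a pod with an emptyDir volume on the default medium and a file created with mode 0666. A sketch of an equivalent pod spec (image and command are illustrative; the suite uses its own test image instead):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-0666"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:  "test-container",
                    Image: "busybox",
                    // Write a file with mode 0666 and read the mode back, the
                    // rough equivalent of what the e2e pod verifies.
                    Command: []string{"sh", "-c",
                        "echo hello > /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"},
                    VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
                }},
                Volumes: []corev1.Volume{{
                    Name: "test-volume",
                    // Empty EmptyDirVolumeSource = default medium (node disk, not tmpfs).
                    VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
                }},
            },
        }
        fmt.Println("pod spec built:", pod.Name)
    }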
[sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 17 13:22:21.330: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:89
Aug 17 13:22:21.464: INFO: Waiting up to 1m0s for all nodes to be ready
Aug 17 13:23:21.549: INFO: Waiting for terminating namespaces to be deleted...
[It] validates basic preemption works [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Create pods that use 2/3 of node resources.
Aug 17 13:23:21.592: INFO: Created pod: pod0-sched-preemption-low-priority
Aug 17 13:23:21.993: INFO: Created pod: pod1-sched-preemption-medium-priority
STEP: Wait for pods to be scheduled.
STEP: Run a high priority pod that has same requirements as that of lower priority pod
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 17 13:23:46.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-653" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77
• [SLOW TEST:85.990 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates basic preemption works [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","total":303,"completed":298,"skipped":4886,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SS
------------------------------
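Preemption tests ride on PriorityClasses: the suite fills nodes with low- and medium-priority pods, then schedules a high-priority pod with the same resource demands, and the scheduler evicts a lower-priority victim. A sketch of creating the classes with client-go (class names and values are illustrative, not the suite's):

    package main

    import (
        "context"
        "path/filepath"

        schedulingv1 "k8s.io/api/scheduling/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/util/homedir"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(homedir.HomeDir(), ".kube", "config"))
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        // Two classes; a pending high-priority pod may preempt a running
        // low-priority one when node resources are exhausted.
        for name, value := range map[string]int32{
            "demo-low-priority":  10,
            "demo-high-priority": 100,
        } {
            pc := &schedulingv1.PriorityClass{
                ObjectMeta: metav1.ObjectMeta{Name: name},
                Value:      value,
            }
            if _, err := client.SchedulingV1().PriorityClasses().Create(context.TODO(), pc, metav1.CreateOptions{}); err != nil {
                panic(err)
            }
        }
        // Pods then opt in via spec.priorityClassName.
    }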
[sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 17 13:23:47.321: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating the pod
Aug 17 13:23:54.687: INFO: Successfully updated pod "annotationupdatefad87a6a-2d2c-4ef9-9650-6b32d5f98af9"
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 17 13:23:56.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7594" for this suite.
• [SLOW TEST:9.419 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":303,"completed":299,"skipped":4888,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSSSSSSS
------------------------------
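The downward API case mounts pod annotations as a volume file; the kubelet rewrites the file when the annotations change, which is what the "Successfully updated pod" step drives. A sketch of the volume wiring (names, image, and command are illustrative):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{
                Name:        "annotationupdate-demo",
                Annotations: map[string]string{"build": "one"}, // updated later by the test
            },
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:    "client-container",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"},
                    VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
                }},
                Volumes: []corev1.Volume{{
                    Name: "podinfo",
                    VolumeSource: corev1.VolumeSource{
                        DownwardAPI: &corev1.DownwardAPIVolumeSource{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                // The kubelet refreshes this file after an
                                // annotation update on the live pod.
                                Path:     "annotations",
                                FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
                            }},
                        },
                    },
                }},
            },
        }
        fmt.Println("pod spec built:", pod.Name)
    }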
[sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 17 13:23:56.742: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 17 13:24:01.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5155" for this suite.
• [SLOW TEST:5.151 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":303,"completed":300,"skipped":4896,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
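The watch-ordering test opens several watches from recorded resource versions and checks that they all deliver events in the same order. The basic building block is a client-go watch started from a resourceVersion, sketched here against ConfigMaps (namespace and the "0" resourceVersion are illustrative choices):

    package main

    import (
        "context"
        "fmt"
        "path/filepath"

        "k8s.io/apimachinery/pkg/api/meta"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/util/homedir"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(homedir.HomeDir(), ".kube", "config"))
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        // Start a watch from a known resourceVersion ("0" = any recent state);
        // the test opens one watch per observed version and compares streams.
        w, err := client.CoreV1().ConfigMaps("default").Watch(context.TODO(),
            metav1.ListOptions{ResourceVersion: "0"})
        if err != nil {
            panic(err)
        }
        defer w.Stop()
        for ev := range w.ResultChan() {
            obj, err := meta.Accessor(ev.Object)
            if err != nil {
                continue // e.g. a Status object on watch error
            }
            fmt.Println(ev.Type, obj.GetName(), obj.GetResourceVersion())
        }
    }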
[sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 17 13:24:01.896: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-392548cb-2b0a-4ea9-a90e-0481590442c5
STEP: Creating a pod to test consume secrets
Aug 17 13:24:02.193: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5f29a8b5-db7a-416a-921d-e62030f345cd" in namespace "projected-4057" to be "Succeeded or Failed"
Aug 17 13:24:02.233: INFO: Pod "pod-projected-secrets-5f29a8b5-db7a-416a-921d-e62030f345cd": Phase="Pending", Reason="", readiness=false. Elapsed: 39.697963ms
Aug 17 13:24:04.246: INFO: Pod "pod-projected-secrets-5f29a8b5-db7a-416a-921d-e62030f345cd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052468503s
Aug 17 13:24:06.350: INFO: Pod "pod-projected-secrets-5f29a8b5-db7a-416a-921d-e62030f345cd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.156430526s
Aug 17 13:24:08.561: INFO: Pod "pod-projected-secrets-5f29a8b5-db7a-416a-921d-e62030f345cd": Phase="Running", Reason="", readiness=true. Elapsed: 6.367073066s
Aug 17 13:24:10.782: INFO: Pod "pod-projected-secrets-5f29a8b5-db7a-416a-921d-e62030f345cd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.58829021s
STEP: Saw pod success
Aug 17 13:24:10.782: INFO: Pod "pod-projected-secrets-5f29a8b5-db7a-416a-921d-e62030f345cd" satisfied condition "Succeeded or Failed"
Aug 17 13:24:10.820: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-5f29a8b5-db7a-416a-921d-e62030f345cd container projected-secret-volume-test:
STEP: delete the pod
Aug 17 13:24:11.444: INFO: Waiting for pod pod-projected-secrets-5f29a8b5-db7a-416a-921d-e62030f345cd to disappear
Aug 17 13:24:11.466: INFO: Pod pod-projected-secrets-5f29a8b5-db7a-416a-921d-e62030f345cd no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 17 13:24:11.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4057" for this suite.
• [SLOW TEST:9.583 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":301,"skipped":4913,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSS
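This final passing test projects a secret with a non-default file mode into a pod that runs as a non-root user with an fsGroup, so the kubelet applies group ownership that keeps the 0440 files readable. A sketch of the relevant spec fields (uid/gid, mode, names, image, and command are illustrative):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        var (
            user  int64 = 1000
            group int64 = 1001
            mode  int32 = 0440
        )
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                // Non-root identity plus fsGroup; the kubelet chowns the
                // volume so group-readable files stay accessible.
                SecurityContext: &corev1.PodSecurityContext{
                    RunAsUser: &user,
                    FSGroup:   &group,
                },
                Containers: []corev1.Container{{
                    Name:         "projected-secret-volume-test",
                    Image:        "busybox",
                    Command:      []string{"sh", "-c", "ls -l /etc/projected-secret-volume && cat /etc/projected-secret-volume/data-1"},
                    VolumeMounts: []corev1.VolumeMount{{Name: "projected-secret-volume", MountPath: "/etc/projected-secret-volume"}},
                }},
                Volumes: []corev1.Volume{{
                    Name: "projected-secret-volume",
                    VolumeSource: corev1.VolumeSource{
                        Projected: &corev1.ProjectedVolumeSource{
                            DefaultMode: &mode, // the "defaultMode" under test
                            Sources: []corev1.VolumeProjection{{
                                Secret: &corev1.SecretProjection{
                                    LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test"},
                                },
                            }},
                        },
                    },
                }},
            },
        }
        fmt.Println("pod spec built:", pod.Name)
    }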
Aug 17 13:24:11.482: INFO: Running AfterSuite actions on all nodes
Aug 17 13:24:11.483: INFO: Running AfterSuite actions on node 1
Aug 17 13:24:11.483: INFO: Skipping dumping logs from cluster

JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml
{"msg":"Test Suite completed","total":303,"completed":301,"skipped":4934,"failed":2,"failures":["[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}

Summarizing 2 Failures:

[Fail] [k8s.io] Probing container [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:71

[Fail] [sig-apps] Daemon set [Serial] [It] should retry creating failed daemon pods [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:291

Ran 303 of 5237 Specs in 9034.233 seconds
FAIL! -- 301 Passed | 2 Failed | 0 Pending | 4934 Skipped
--- FAIL: TestE2E (9035.25s)
FAIL