I0915 10:31:48.183667 7 test_context.go:429] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0915 10:31:48.183832 7 e2e.go:129] Starting e2e run "a90abe08-8b9b-4d48-a9dd-629358f843a9" on Ginkgo node 1
{"msg":"Test Suite starting","total":303,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1600165906 - Will randomize all specs
Will run 303 of 5232 specs
Sep 15 10:31:48.247: INFO: >>> kubeConfig: /root/.kube/config
Sep 15 10:31:48.249: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Sep 15 10:31:48.268: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Sep 15 10:31:48.307: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Sep 15 10:31:48.307: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Sep 15 10:31:48.307: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Sep 15 10:31:48.315: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Sep 15 10:31:48.315: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Sep 15 10:31:48.315: INFO: e2e test version: v1.19.1
Sep 15 10:31:48.316: INFO: kube-apiserver version: v1.19.0
Sep 15 10:31:48.316: INFO: >>> kubeConfig: /root/.kube/config
Sep 15 10:31:48.320: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 15 10:31:48.320: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
Sep 15 10:31:48.420: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating Pod
STEP: Waiting for the pod running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Sep 15 10:31:58.462: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-43 PodName:pod-sharedvolume-2357f483-a5e7-45a4-8e95-0d583448dc00 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 15 10:31:58.462: INFO: >>> kubeConfig: /root/.kube/config
I0915 10:31:58.494685 7 log.go:181] (0xc002f9ae70) (0xc00359b040) Create stream
I0915 10:31:58.494712 7 log.go:181] (0xc002f9ae70) (0xc00359b040) Stream added, broadcasting: 1
I0915 10:31:58.496975 7 log.go:181] (0xc002f9ae70) Reply frame received for 1
I0915 10:31:58.497029 7 log.go:181] (0xc002f9ae70) (0xc0032c19a0) Create stream
I0915 10:31:58.497050 7 log.go:181] (0xc002f9ae70) (0xc0032c19a0) Stream added, broadcasting: 3
I0915 10:31:58.497998 7 log.go:181] (0xc002f9ae70) Reply frame received for 3
I0915 10:31:58.498026 7 log.go:181] (0xc002f9ae70) (0xc0032c1a40) Create stream
I0915 10:31:58.498038 7 log.go:181] (0xc002f9ae70) (0xc0032c1a40) Stream added, broadcasting: 5
I0915 10:31:58.499152 7 log.go:181] (0xc002f9ae70) Reply frame received for 5
I0915 10:31:58.576426 7 log.go:181] (0xc002f9ae70)
Data frame received for 3 I0915 10:31:58.576460 7 log.go:181] (0xc0032c19a0) (3) Data frame handling I0915 10:31:58.576496 7 log.go:181] (0xc002f9ae70) Data frame received for 5 I0915 10:31:58.576512 7 log.go:181] (0xc0032c1a40) (5) Data frame handling I0915 10:31:58.576536 7 log.go:181] (0xc0032c19a0) (3) Data frame sent I0915 10:31:58.576570 7 log.go:181] (0xc002f9ae70) Data frame received for 3 I0915 10:31:58.576588 7 log.go:181] (0xc0032c19a0) (3) Data frame handling I0915 10:31:58.578423 7 log.go:181] (0xc002f9ae70) Data frame received for 1 I0915 10:31:58.578435 7 log.go:181] (0xc00359b040) (1) Data frame handling I0915 10:31:58.578447 7 log.go:181] (0xc00359b040) (1) Data frame sent I0915 10:31:58.578458 7 log.go:181] (0xc002f9ae70) (0xc00359b040) Stream removed, broadcasting: 1 I0915 10:31:58.578470 7 log.go:181] (0xc002f9ae70) Go away received I0915 10:31:58.578983 7 log.go:181] (0xc002f9ae70) (0xc00359b040) Stream removed, broadcasting: 1 I0915 10:31:58.579004 7 log.go:181] (0xc002f9ae70) (0xc0032c19a0) Stream removed, broadcasting: 3 I0915 10:31:58.579015 7 log.go:181] (0xc002f9ae70) (0xc0032c1a40) Stream removed, broadcasting: 5 Sep 15 10:31:58.579: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 10:31:58.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-43" for this suite. 
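For reference, the pod this EmptyDir conformance test exercises looks roughly like the following sketch, built here as a plain Python dict rather than YAML so it can be checked standalone. The container names and mount path follow the log above; the images and the exact manifest are assumptions, not the generated e2e manifest.

```python
# Minimal sketch of a two-container pod sharing one emptyDir volume.
# A file written by one container under /usr/share/volumeshare is
# visible to the other (the test cats shareddata.txt from there).
shared_volume_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-sharedvolume-example"},  # illustrative name
    "spec": {
        "volumes": [{"name": "shared-data", "emptyDir": {}}],
        "containers": [
            {
                "name": "nginx-container",
                "image": "nginx",  # assumed image
                "volumeMounts": [
                    {"name": "shared-data", "mountPath": "/usr/share/volumeshare"}
                ],
            },
            {
                "name": "busybox-main-container",
                "image": "busybox",  # assumed image
                "command": ["sh", "-c", "sleep 3600"],
                "volumeMounts": [
                    {"name": "shared-data", "mountPath": "/usr/share/volumeshare"}
                ],
            },
        ],
    },
}

# Both containers mount the same named volume, which is what makes the
# shared read in the log possible.
mounts = {c["name"]: c["volumeMounts"][0]["name"]
          for c in shared_volume_pod["spec"]["containers"]}
assert set(mounts.values()) == {"shared-data"}
```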
• [SLOW TEST:10.270 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 pod should support shared volumes between containers [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":303,"completed":1,"skipped":98,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 10:31:58.591: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 15 10:31:58.652: INFO: Waiting up to 5m0s for pod "busybox-user-65534-fdde36a0-2ad0-4a5c-b14d-a3f2dbc52393" in namespace "security-context-test-5350" to be 
"Succeeded or Failed" Sep 15 10:31:58.744: INFO: Pod "busybox-user-65534-fdde36a0-2ad0-4a5c-b14d-a3f2dbc52393": Phase="Pending", Reason="", readiness=false. Elapsed: 92.016465ms Sep 15 10:32:00.753: INFO: Pod "busybox-user-65534-fdde36a0-2ad0-4a5c-b14d-a3f2dbc52393": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100887391s Sep 15 10:32:02.757: INFO: Pod "busybox-user-65534-fdde36a0-2ad0-4a5c-b14d-a3f2dbc52393": Phase="Pending", Reason="", readiness=false. Elapsed: 4.10491879s Sep 15 10:32:04.966: INFO: Pod "busybox-user-65534-fdde36a0-2ad0-4a5c-b14d-a3f2dbc52393": Phase="Pending", Reason="", readiness=false. Elapsed: 6.314273853s Sep 15 10:32:06.971: INFO: Pod "busybox-user-65534-fdde36a0-2ad0-4a5c-b14d-a3f2dbc52393": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.319313067s Sep 15 10:32:06.971: INFO: Pod "busybox-user-65534-fdde36a0-2ad0-4a5c-b14d-a3f2dbc52393" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 10:32:06.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-5350" for this suite. 
• [SLOW TEST:8.391 seconds] [k8s.io] Security Context /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 When creating a container with runAsUser /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:45 should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":2,"skipped":117,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 10:32:06.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 15 10:32:07.035: INFO: >>> kubeConfig: /root/.kube/config 
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Sep 15 10:32:10.057: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3808 create -f -' Sep 15 10:32:13.330: INFO: stderr: "" Sep 15 10:32:13.330: INFO: stdout: "e2e-test-crd-publish-openapi-4004-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Sep 15 10:32:13.330: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3808 delete e2e-test-crd-publish-openapi-4004-crds test-cr' Sep 15 10:32:13.437: INFO: stderr: "" Sep 15 10:32:13.437: INFO: stdout: "e2e-test-crd-publish-openapi-4004-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Sep 15 10:32:13.437: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3808 apply -f -' Sep 15 10:32:13.728: INFO: stderr: "" Sep 15 10:32:13.729: INFO: stdout: "e2e-test-crd-publish-openapi-4004-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Sep 15 10:32:13.729: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3808 delete e2e-test-crd-publish-openapi-4004-crds test-cr' Sep 15 10:32:13.839: INFO: stderr: "" Sep 15 10:32:13.839: INFO: stdout: "e2e-test-crd-publish-openapi-4004-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Sep 15 10:32:13.839: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4004-crds' Sep 15 10:32:14.117: INFO: stderr: "" Sep 15 10:32:14.117: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4004-crd\nVERSION: 
crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 10:32:17.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3808" for this suite. 
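The CRD behavior being verified above ("preserving unknown fields in an embedded object") comes from the `x-kubernetes-preserve-unknown-fields` extension in the CRD's structural schema. The following is a sketch of that schema shape as a Python dict; the field layout is an assumption modeled on the `spec`/`status` ("Waldo") fields shown in the `kubectl explain` output, not the exact generated schema.

```python
# Sketch of an OpenAPI v3 structural schema where embedded objects keep
# unknown properties instead of having them pruned by the apiserver.
crd_schema = {
    "type": "object",
    "properties": {
        "spec": {
            "type": "object",
            # Unknown fields under spec are preserved, not pruned.
            "x-kubernetes-preserve-unknown-fields": True,
        },
        "status": {
            "type": "object",
            "x-kubernetes-preserve-unknown-fields": True,
        },
    },
}

assert crd_schema["properties"]["spec"]["x-kubernetes-preserve-unknown-fields"] is True
```

With pruning disabled this way, the client-side `kubectl create`/`apply` requests in the log can carry arbitrary unknown properties and still be accepted.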
• [SLOW TEST:10.103 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":303,"completed":3,"skipped":130,"failed":0} S ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 10:32:17.086: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name secret-emptykey-test-48c75528-ac3f-4bb6-a06a-6a40c5def3ab [AfterEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 10:32:17.325: INFO: Waiting up to 3m0s 
for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2392" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":303,"completed":4,"skipped":131,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 10:32:17.376: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on node default medium Sep 15 10:32:17.564: INFO: Waiting up to 5m0s for pod "pod-1f71b1e5-3ea3-439c-be7c-75278780ea44" in namespace "emptydir-4693" to be "Succeeded or Failed" Sep 15 10:32:17.573: INFO: Pod "pod-1f71b1e5-3ea3-439c-be7c-75278780ea44": Phase="Pending", Reason="", readiness=false. Elapsed: 9.339659ms Sep 15 10:32:19.584: INFO: Pod "pod-1f71b1e5-3ea3-439c-be7c-75278780ea44": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020662035s Sep 15 10:32:21.588: INFO: Pod "pod-1f71b1e5-3ea3-439c-be7c-75278780ea44": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024431705s Sep 15 10:32:23.596: INFO: Pod "pod-1f71b1e5-3ea3-439c-be7c-75278780ea44": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.03253927s STEP: Saw pod success Sep 15 10:32:23.596: INFO: Pod "pod-1f71b1e5-3ea3-439c-be7c-75278780ea44" satisfied condition "Succeeded or Failed" Sep 15 10:32:23.612: INFO: Trying to get logs from node kali-worker pod pod-1f71b1e5-3ea3-439c-be7c-75278780ea44 container test-container: STEP: delete the pod Sep 15 10:32:23.669: INFO: Waiting for pod pod-1f71b1e5-3ea3-439c-be7c-75278780ea44 to disappear Sep 15 10:32:23.762: INFO: Pod pod-1f71b1e5-3ea3-439c-be7c-75278780ea44 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 10:32:23.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4693" for this suite. • [SLOW TEST:6.408 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":5,"skipped":144,"failed":0} SSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: 
Creating a kubernetes client Sep 15 10:32:23.784: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override arguments Sep 15 10:32:24.105: INFO: Waiting up to 5m0s for pod "client-containers-46a37481-b628-441a-bba5-0aeafc62398d" in namespace "containers-4539" to be "Succeeded or Failed" Sep 15 10:32:24.123: INFO: Pod "client-containers-46a37481-b628-441a-bba5-0aeafc62398d": Phase="Pending", Reason="", readiness=false. Elapsed: 17.784852ms Sep 15 10:32:26.206: INFO: Pod "client-containers-46a37481-b628-441a-bba5-0aeafc62398d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100294692s Sep 15 10:32:28.210: INFO: Pod "client-containers-46a37481-b628-441a-bba5-0aeafc62398d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.104822444s STEP: Saw pod success Sep 15 10:32:28.210: INFO: Pod "client-containers-46a37481-b628-441a-bba5-0aeafc62398d" satisfied condition "Succeeded or Failed" Sep 15 10:32:28.213: INFO: Trying to get logs from node kali-worker pod client-containers-46a37481-b628-441a-bba5-0aeafc62398d container test-container: STEP: delete the pod Sep 15 10:32:28.234: INFO: Waiting for pod client-containers-46a37481-b628-441a-bba5-0aeafc62398d to disappear Sep 15 10:32:28.244: INFO: Pod client-containers-46a37481-b628-441a-bba5-0aeafc62398d no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 10:32:28.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-4539" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":303,"completed":6,"skipped":147,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 10:32:28.252: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 15 10:32:28.897: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 15 10:32:30.907: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735762748, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735762748, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735762749, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735762748, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 15 10:32:33.960: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create 
a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 10:32:44.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1532" for this suite. STEP: Destroying namespace "webhook-1532-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:16.580 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":303,"completed":7,"skipped":164,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 10:32:44.833: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-b8f6eece-8def-4ee9-ad47-e25e99761a8c STEP: Creating a pod to test consume configMaps Sep 15 10:32:44.923: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b4b4a22e-f86b-4c5f-9fd2-301a8c3422ec" in namespace "projected-8175" to be "Succeeded or Failed" Sep 15 10:32:45.044: INFO: Pod "pod-projected-configmaps-b4b4a22e-f86b-4c5f-9fd2-301a8c3422ec": Phase="Pending", Reason="", readiness=false. Elapsed: 120.766181ms Sep 15 10:32:47.048: INFO: Pod "pod-projected-configmaps-b4b4a22e-f86b-4c5f-9fd2-301a8c3422ec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.124980272s Sep 15 10:32:49.059: INFO: Pod "pod-projected-configmaps-b4b4a22e-f86b-4c5f-9fd2-301a8c3422ec": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.135743043s STEP: Saw pod success Sep 15 10:32:49.059: INFO: Pod "pod-projected-configmaps-b4b4a22e-f86b-4c5f-9fd2-301a8c3422ec" satisfied condition "Succeeded or Failed" Sep 15 10:32:49.062: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-b4b4a22e-f86b-4c5f-9fd2-301a8c3422ec container projected-configmap-volume-test: STEP: delete the pod Sep 15 10:32:49.110: INFO: Waiting for pod pod-projected-configmaps-b4b4a22e-f86b-4c5f-9fd2-301a8c3422ec to disappear Sep 15 10:32:49.119: INFO: Pod pod-projected-configmaps-b4b4a22e-f86b-4c5f-9fd2-301a8c3422ec no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 10:32:49.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8175" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":8,"skipped":177,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 10:32:49.125: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) 
[NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override command Sep 15 10:32:49.235: INFO: Waiting up to 5m0s for pod "client-containers-f2f46d2e-b29c-48a7-80cf-9f3c0892c929" in namespace "containers-5346" to be "Succeeded or Failed" Sep 15 10:32:49.276: INFO: Pod "client-containers-f2f46d2e-b29c-48a7-80cf-9f3c0892c929": Phase="Pending", Reason="", readiness=false. Elapsed: 41.425044ms Sep 15 10:32:51.281: INFO: Pod "client-containers-f2f46d2e-b29c-48a7-80cf-9f3c0892c929": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045564159s Sep 15 10:32:53.285: INFO: Pod "client-containers-f2f46d2e-b29c-48a7-80cf-9f3c0892c929": Phase="Running", Reason="", readiness=true. Elapsed: 4.050137389s Sep 15 10:32:55.290: INFO: Pod "client-containers-f2f46d2e-b29c-48a7-80cf-9f3c0892c929": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.054588083s STEP: Saw pod success Sep 15 10:32:55.290: INFO: Pod "client-containers-f2f46d2e-b29c-48a7-80cf-9f3c0892c929" satisfied condition "Succeeded or Failed" Sep 15 10:32:55.293: INFO: Trying to get logs from node kali-worker pod client-containers-f2f46d2e-b29c-48a7-80cf-9f3c0892c929 container test-container: STEP: delete the pod Sep 15 10:32:55.355: INFO: Waiting for pod client-containers-f2f46d2e-b29c-48a7-80cf-9f3c0892c929 to disappear Sep 15 10:32:55.358: INFO: Pod client-containers-f2f46d2e-b29c-48a7-80cf-9f3c0892c929 no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 10:32:55.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-5346" for this suite. 
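The two Docker Containers tests above exercise the standard mapping between pod spec fields and image defaults: `command` overrides the image's ENTRYPOINT, and `args` overrides the image's CMD. A sketch of such a pod (names and values illustrative, not the e2e-generated manifest):

```python
# Sketch of overriding an image's entrypoint and default arguments.
override_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "client-containers-example"},  # illustrative
    "spec": {
        "restartPolicy": "Never",
        "containers": [{
            "name": "test-container",
            "image": "busybox",
            "command": ["/bin/echo"],           # replaces image ENTRYPOINT
            "args": ["override", "arguments"],  # replaces image CMD
        }],
    },
}

c = override_pod["spec"]["containers"][0]
assert c["command"] == ["/bin/echo"]
assert c["args"] == ["override", "arguments"]
```

Omitting `command` while setting `args` (as the "docker cmd" test does) keeps the image ENTRYPOINT but replaces its CMD.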
• [SLOW TEST:6.240 seconds] [k8s.io] Docker Containers /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":303,"completed":9,"skipped":199,"failed":0} SSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 10:32:55.365: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Sep 15 10:32:55.410: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Sep 15 10:32:55.429: INFO: Waiting for terminating namespaces to be deleted... 
Sep 15 10:32:55.432: INFO: Logging pods the apiserver thinks is on node kali-worker before test Sep 15 10:32:55.438: INFO: rally-00a9a3e2-1nvlncas-rrklb from c-rally-00a9a3e2-einbow0v started at 2020-09-15 10:32:41 +0000 UTC (1 container statuses recorded) Sep 15 10:32:55.438: INFO: Container rally-00a9a3e2-1nvlncas ready: true, restart count 0 Sep 15 10:32:55.438: INFO: kindnet-jk7qk from kube-system started at 2020-09-13 16:57:34 +0000 UTC (1 container statuses recorded) Sep 15 10:32:55.438: INFO: Container kindnet-cni ready: true, restart count 0 Sep 15 10:32:55.438: INFO: kube-proxy-kz8hk from kube-system started at 2020-09-13 16:57:34 +0000 UTC (1 container statuses recorded) Sep 15 10:32:55.438: INFO: Container kube-proxy ready: true, restart count 0 Sep 15 10:32:55.438: INFO: Logging pods the apiserver thinks is on node kali-worker2 before test Sep 15 10:32:55.443: INFO: rally-00a9a3e2-1nvlncas-ljln2 from c-rally-00a9a3e2-einbow0v started at 2020-09-15 10:32:41 +0000 UTC (1 container statuses recorded) Sep 15 10:32:55.443: INFO: Container rally-00a9a3e2-1nvlncas ready: true, restart count 0 Sep 15 10:32:55.443: INFO: rally-00a9a3e2-1nvlncas-q9wkq from c-rally-00a9a3e2-einbow0v started at 2020-09-15 10:32:46 +0000 UTC (1 container statuses recorded) Sep 15 10:32:55.443: INFO: Container rally-00a9a3e2-1nvlncas ready: false, restart count 0 Sep 15 10:32:55.443: INFO: kindnet-r64bh from kube-system started at 2020-09-13 16:57:34 +0000 UTC (1 container statuses recorded) Sep 15 10:32:55.443: INFO: Container kindnet-cni ready: true, restart count 0 Sep 15 10:32:55.443: INFO: kube-proxy-rnv9w from kube-system started at 2020-09-13 16:57:34 +0000 UTC (1 container statuses recorded) Sep 15 10:32:55.443: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-f725e0a2-0fa1-4bf7-bf37-05e42b03773d 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-f725e0a2-0fa1-4bf7-bf37-05e42b03773d off the node kali-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-f725e0a2-0fa1-4bf7-bf37-05e42b03773d [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 10:33:09.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9138" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:14.541 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":303,"completed":10,"skipped":207,"failed":0} SS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 10:33:09.906: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-3652 STEP: creating a selector STEP: Creating the service pods in kubernetes Sep 15 
10:33:10.035: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Sep 15 10:33:10.096: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Sep 15 10:33:12.374: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Sep 15 10:33:14.100: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 15 10:33:16.122: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 15 10:33:18.128: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 15 10:33:20.140: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 15 10:33:22.101: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 15 10:33:24.110: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 15 10:33:26.100: INFO: The status of Pod netserver-0 is Running (Ready = true) Sep 15 10:33:26.106: INFO: The status of Pod netserver-1 is Running (Ready = false) Sep 15 10:33:28.402: INFO: The status of Pod netserver-1 is Running (Ready = false) Sep 15 10:33:30.111: INFO: The status of Pod netserver-1 is Running (Ready = false) Sep 15 10:33:32.111: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Sep 15 10:33:36.338: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.18:8080/dial?request=hostname&protocol=udp&host=10.244.1.16&port=8081&tries=1'] Namespace:pod-network-test-3652 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 15 10:33:36.338: INFO: >>> kubeConfig: /root/.kube/config I0915 10:33:36.374944 7 log.go:181] (0xc0032b4a50) (0xc00218ff40) Create stream I0915 10:33:36.375017 7 log.go:181] (0xc0032b4a50) (0xc00218ff40) Stream added, broadcasting: 1 I0915 10:33:36.378720 7 log.go:181] (0xc0032b4a50) Reply frame received for 1 I0915 10:33:36.378759 7 log.go:181] (0xc0032b4a50) (0xc002ad2320) Create stream I0915 
10:33:36.378772 7 log.go:181] (0xc0032b4a50) (0xc002ad2320) Stream added, broadcasting: 3 I0915 10:33:36.379779 7 log.go:181] (0xc0032b4a50) Reply frame received for 3 I0915 10:33:36.379808 7 log.go:181] (0xc0032b4a50) (0xc002ad23c0) Create stream I0915 10:33:36.379827 7 log.go:181] (0xc0032b4a50) (0xc002ad23c0) Stream added, broadcasting: 5 I0915 10:33:36.381111 7 log.go:181] (0xc0032b4a50) Reply frame received for 5 I0915 10:33:36.451584 7 log.go:181] (0xc0032b4a50) Data frame received for 3 I0915 10:33:36.451606 7 log.go:181] (0xc002ad2320) (3) Data frame handling I0915 10:33:36.451618 7 log.go:181] (0xc002ad2320) (3) Data frame sent I0915 10:33:36.452454 7 log.go:181] (0xc0032b4a50) Data frame received for 3 I0915 10:33:36.452496 7 log.go:181] (0xc002ad2320) (3) Data frame handling I0915 10:33:36.452672 7 log.go:181] (0xc0032b4a50) Data frame received for 5 I0915 10:33:36.452684 7 log.go:181] (0xc002ad23c0) (5) Data frame handling I0915 10:33:36.454621 7 log.go:181] (0xc0032b4a50) Data frame received for 1 I0915 10:33:36.454635 7 log.go:181] (0xc00218ff40) (1) Data frame handling I0915 10:33:36.454643 7 log.go:181] (0xc00218ff40) (1) Data frame sent I0915 10:33:36.454655 7 log.go:181] (0xc0032b4a50) (0xc00218ff40) Stream removed, broadcasting: 1 I0915 10:33:36.454707 7 log.go:181] (0xc0032b4a50) Go away received I0915 10:33:36.454771 7 log.go:181] (0xc0032b4a50) (0xc00218ff40) Stream removed, broadcasting: 1 I0915 10:33:36.454817 7 log.go:181] (0xc0032b4a50) (0xc002ad2320) Stream removed, broadcasting: 3 I0915 10:33:36.454846 7 log.go:181] (0xc0032b4a50) (0xc002ad23c0) Stream removed, broadcasting: 5 Sep 15 10:33:36.454: INFO: Waiting for responses: map[] Sep 15 10:33:36.458: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.18:8080/dial?request=hostname&protocol=udp&host=10.244.2.11&port=8081&tries=1'] Namespace:pod-network-test-3652 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true 
PreserveWhitespace:false} Sep 15 10:33:36.458: INFO: >>> kubeConfig: /root/.kube/config I0915 10:33:36.486729 7 log.go:181] (0xc002b8e840) (0xc002ad2820) Create stream I0915 10:33:36.486755 7 log.go:181] (0xc002b8e840) (0xc002ad2820) Stream added, broadcasting: 1 I0915 10:33:36.490833 7 log.go:181] (0xc002b8e840) Reply frame received for 1 I0915 10:33:36.490886 7 log.go:181] (0xc002b8e840) (0xc0037f46e0) Create stream I0915 10:33:36.490901 7 log.go:181] (0xc002b8e840) (0xc0037f46e0) Stream added, broadcasting: 3 I0915 10:33:36.491831 7 log.go:181] (0xc002b8e840) Reply frame received for 3 I0915 10:33:36.491893 7 log.go:181] (0xc002b8e840) (0xc002a700a0) Create stream I0915 10:33:36.491907 7 log.go:181] (0xc002b8e840) (0xc002a700a0) Stream added, broadcasting: 5 I0915 10:33:36.492732 7 log.go:181] (0xc002b8e840) Reply frame received for 5 I0915 10:33:36.556685 7 log.go:181] (0xc002b8e840) Data frame received for 3 I0915 10:33:36.556717 7 log.go:181] (0xc0037f46e0) (3) Data frame handling I0915 10:33:36.556736 7 log.go:181] (0xc0037f46e0) (3) Data frame sent I0915 10:33:36.557243 7 log.go:181] (0xc002b8e840) Data frame received for 3 I0915 10:33:36.557265 7 log.go:181] (0xc0037f46e0) (3) Data frame handling I0915 10:33:36.557331 7 log.go:181] (0xc002b8e840) Data frame received for 5 I0915 10:33:36.557348 7 log.go:181] (0xc002a700a0) (5) Data frame handling I0915 10:33:36.559009 7 log.go:181] (0xc002b8e840) Data frame received for 1 I0915 10:33:36.559049 7 log.go:181] (0xc002ad2820) (1) Data frame handling I0915 10:33:36.559102 7 log.go:181] (0xc002ad2820) (1) Data frame sent I0915 10:33:36.559120 7 log.go:181] (0xc002b8e840) (0xc002ad2820) Stream removed, broadcasting: 1 I0915 10:33:36.559132 7 log.go:181] (0xc002b8e840) Go away received I0915 10:33:36.559245 7 log.go:181] (0xc002b8e840) (0xc002ad2820) Stream removed, broadcasting: 1 I0915 10:33:36.559278 7 log.go:181] (0xc002b8e840) (0xc0037f46e0) Stream removed, broadcasting: 3 I0915 10:33:36.559288 7 log.go:181] 
(0xc002b8e840) (0xc002a700a0) Stream removed, broadcasting: 5 Sep 15 10:33:36.559: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 10:33:36.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-3652" for this suite. • [SLOW TEST:26.662 seconds] [sig-network] Networking /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":303,"completed":11,"skipped":209,"failed":0} SSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 10:33:36.569: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] 
Kubectl client /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should support proxy with --port 0 [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting the proxy server Sep 15 10:33:36.784: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 10:33:36.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8657" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":303,"completed":12,"skipped":216,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Discovery /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 10:33:36.893: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename discovery STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Discovery 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/discovery.go:39 STEP: Setting up server cert [It] should validate PreferredVersion for each APIGroup [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 15 10:33:37.400: INFO: Checking APIGroup: apiregistration.k8s.io Sep 15 10:33:37.402: INFO: PreferredVersion.GroupVersion: apiregistration.k8s.io/v1 Sep 15 10:33:37.402: INFO: Versions found [{apiregistration.k8s.io/v1 v1} {apiregistration.k8s.io/v1beta1 v1beta1}] Sep 15 10:33:37.402: INFO: apiregistration.k8s.io/v1 matches apiregistration.k8s.io/v1 Sep 15 10:33:37.402: INFO: Checking APIGroup: extensions Sep 15 10:33:37.402: INFO: PreferredVersion.GroupVersion: extensions/v1beta1 Sep 15 10:33:37.402: INFO: Versions found [{extensions/v1beta1 v1beta1}] Sep 15 10:33:37.402: INFO: extensions/v1beta1 matches extensions/v1beta1 Sep 15 10:33:37.402: INFO: Checking APIGroup: apps Sep 15 10:33:37.403: INFO: PreferredVersion.GroupVersion: apps/v1 Sep 15 10:33:37.403: INFO: Versions found [{apps/v1 v1}] Sep 15 10:33:37.403: INFO: apps/v1 matches apps/v1 Sep 15 10:33:37.403: INFO: Checking APIGroup: events.k8s.io Sep 15 10:33:37.404: INFO: PreferredVersion.GroupVersion: events.k8s.io/v1 Sep 15 10:33:37.404: INFO: Versions found [{events.k8s.io/v1 v1} {events.k8s.io/v1beta1 v1beta1}] Sep 15 10:33:37.404: INFO: events.k8s.io/v1 matches events.k8s.io/v1 Sep 15 10:33:37.404: INFO: Checking APIGroup: authentication.k8s.io Sep 15 10:33:37.406: INFO: PreferredVersion.GroupVersion: authentication.k8s.io/v1 Sep 15 10:33:37.406: INFO: Versions found [{authentication.k8s.io/v1 v1} {authentication.k8s.io/v1beta1 v1beta1}] Sep 15 10:33:37.406: INFO: authentication.k8s.io/v1 matches authentication.k8s.io/v1 Sep 15 10:33:37.406: INFO: Checking APIGroup: authorization.k8s.io Sep 15 
10:33:37.407: INFO: PreferredVersion.GroupVersion: authorization.k8s.io/v1 Sep 15 10:33:37.407: INFO: Versions found [{authorization.k8s.io/v1 v1} {authorization.k8s.io/v1beta1 v1beta1}] Sep 15 10:33:37.407: INFO: authorization.k8s.io/v1 matches authorization.k8s.io/v1 Sep 15 10:33:37.407: INFO: Checking APIGroup: autoscaling Sep 15 10:33:37.408: INFO: PreferredVersion.GroupVersion: autoscaling/v1 Sep 15 10:33:37.408: INFO: Versions found [{autoscaling/v1 v1} {autoscaling/v2beta1 v2beta1} {autoscaling/v2beta2 v2beta2}] Sep 15 10:33:37.408: INFO: autoscaling/v1 matches autoscaling/v1 Sep 15 10:33:37.408: INFO: Checking APIGroup: batch Sep 15 10:33:37.409: INFO: PreferredVersion.GroupVersion: batch/v1 Sep 15 10:33:37.409: INFO: Versions found [{batch/v1 v1} {batch/v1beta1 v1beta1}] Sep 15 10:33:37.409: INFO: batch/v1 matches batch/v1 Sep 15 10:33:37.409: INFO: Checking APIGroup: certificates.k8s.io Sep 15 10:33:37.410: INFO: PreferredVersion.GroupVersion: certificates.k8s.io/v1 Sep 15 10:33:37.410: INFO: Versions found [{certificates.k8s.io/v1 v1} {certificates.k8s.io/v1beta1 v1beta1}] Sep 15 10:33:37.410: INFO: certificates.k8s.io/v1 matches certificates.k8s.io/v1 Sep 15 10:33:37.410: INFO: Checking APIGroup: networking.k8s.io Sep 15 10:33:37.411: INFO: PreferredVersion.GroupVersion: networking.k8s.io/v1 Sep 15 10:33:37.411: INFO: Versions found [{networking.k8s.io/v1 v1} {networking.k8s.io/v1beta1 v1beta1}] Sep 15 10:33:37.411: INFO: networking.k8s.io/v1 matches networking.k8s.io/v1 Sep 15 10:33:37.411: INFO: Checking APIGroup: policy Sep 15 10:33:37.412: INFO: PreferredVersion.GroupVersion: policy/v1beta1 Sep 15 10:33:37.412: INFO: Versions found [{policy/v1beta1 v1beta1}] Sep 15 10:33:37.412: INFO: policy/v1beta1 matches policy/v1beta1 Sep 15 10:33:37.412: INFO: Checking APIGroup: rbac.authorization.k8s.io Sep 15 10:33:37.413: INFO: PreferredVersion.GroupVersion: rbac.authorization.k8s.io/v1 Sep 15 10:33:37.413: INFO: Versions found [{rbac.authorization.k8s.io/v1 
v1} {rbac.authorization.k8s.io/v1beta1 v1beta1}] Sep 15 10:33:37.413: INFO: rbac.authorization.k8s.io/v1 matches rbac.authorization.k8s.io/v1 Sep 15 10:33:37.413: INFO: Checking APIGroup: storage.k8s.io Sep 15 10:33:37.414: INFO: PreferredVersion.GroupVersion: storage.k8s.io/v1 Sep 15 10:33:37.414: INFO: Versions found [{storage.k8s.io/v1 v1} {storage.k8s.io/v1beta1 v1beta1}] Sep 15 10:33:37.414: INFO: storage.k8s.io/v1 matches storage.k8s.io/v1 Sep 15 10:33:37.414: INFO: Checking APIGroup: admissionregistration.k8s.io Sep 15 10:33:37.415: INFO: PreferredVersion.GroupVersion: admissionregistration.k8s.io/v1 Sep 15 10:33:37.415: INFO: Versions found [{admissionregistration.k8s.io/v1 v1} {admissionregistration.k8s.io/v1beta1 v1beta1}] Sep 15 10:33:37.415: INFO: admissionregistration.k8s.io/v1 matches admissionregistration.k8s.io/v1 Sep 15 10:33:37.415: INFO: Checking APIGroup: apiextensions.k8s.io Sep 15 10:33:37.416: INFO: PreferredVersion.GroupVersion: apiextensions.k8s.io/v1 Sep 15 10:33:37.416: INFO: Versions found [{apiextensions.k8s.io/v1 v1} {apiextensions.k8s.io/v1beta1 v1beta1}] Sep 15 10:33:37.416: INFO: apiextensions.k8s.io/v1 matches apiextensions.k8s.io/v1 Sep 15 10:33:37.416: INFO: Checking APIGroup: scheduling.k8s.io Sep 15 10:33:37.417: INFO: PreferredVersion.GroupVersion: scheduling.k8s.io/v1 Sep 15 10:33:37.417: INFO: Versions found [{scheduling.k8s.io/v1 v1} {scheduling.k8s.io/v1beta1 v1beta1}] Sep 15 10:33:37.417: INFO: scheduling.k8s.io/v1 matches scheduling.k8s.io/v1 Sep 15 10:33:37.417: INFO: Checking APIGroup: coordination.k8s.io Sep 15 10:33:37.418: INFO: PreferredVersion.GroupVersion: coordination.k8s.io/v1 Sep 15 10:33:37.418: INFO: Versions found [{coordination.k8s.io/v1 v1} {coordination.k8s.io/v1beta1 v1beta1}] Sep 15 10:33:37.418: INFO: coordination.k8s.io/v1 matches coordination.k8s.io/v1 Sep 15 10:33:37.418: INFO: Checking APIGroup: node.k8s.io Sep 15 10:33:37.418: INFO: PreferredVersion.GroupVersion: node.k8s.io/v1beta1 Sep 15 
10:33:37.418: INFO: Versions found [{node.k8s.io/v1beta1 v1beta1}] Sep 15 10:33:37.418: INFO: node.k8s.io/v1beta1 matches node.k8s.io/v1beta1 Sep 15 10:33:37.418: INFO: Checking APIGroup: discovery.k8s.io Sep 15 10:33:37.419: INFO: PreferredVersion.GroupVersion: discovery.k8s.io/v1beta1 Sep 15 10:33:37.419: INFO: Versions found [{discovery.k8s.io/v1beta1 v1beta1}] Sep 15 10:33:37.420: INFO: discovery.k8s.io/v1beta1 matches discovery.k8s.io/v1beta1 [AfterEach] [sig-api-machinery] Discovery /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 10:33:37.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "discovery-5371" for this suite. •{"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":303,"completed":13,"skipped":237,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 10:33:37.428: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting a background goroutine to produce watch events 
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 10:33:44.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2796" for this suite. • [SLOW TEST:7.253 seconds] [sig-api-machinery] Watchers /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":303,"completed":14,"skipped":293,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 10:33:44.681: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] 
[NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-53b099f8-be03-4236-a259-9973cf9de8c4 STEP: Creating a pod to test consume secrets Sep 15 10:33:44.820: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a754623d-ab40-4c35-ac4c-c6fe915e3e8a" in namespace "projected-1912" to be "Succeeded or Failed" Sep 15 10:33:44.987: INFO: Pod "pod-projected-secrets-a754623d-ab40-4c35-ac4c-c6fe915e3e8a": Phase="Pending", Reason="", readiness=false. Elapsed: 166.986745ms Sep 15 10:33:46.992: INFO: Pod "pod-projected-secrets-a754623d-ab40-4c35-ac4c-c6fe915e3e8a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.172576074s Sep 15 10:33:48.996: INFO: Pod "pod-projected-secrets-a754623d-ab40-4c35-ac4c-c6fe915e3e8a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.176470697s STEP: Saw pod success Sep 15 10:33:48.996: INFO: Pod "pod-projected-secrets-a754623d-ab40-4c35-ac4c-c6fe915e3e8a" satisfied condition "Succeeded or Failed" Sep 15 10:33:48.999: INFO: Trying to get logs from node kali-worker2 pod pod-projected-secrets-a754623d-ab40-4c35-ac4c-c6fe915e3e8a container projected-secret-volume-test: STEP: delete the pod Sep 15 10:33:49.083: INFO: Waiting for pod pod-projected-secrets-a754623d-ab40-4c35-ac4c-c6fe915e3e8a to disappear Sep 15 10:33:49.134: INFO: Pod pod-projected-secrets-a754623d-ab40-4c35-ac4c-c6fe915e3e8a no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 10:33:49.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1912" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":15,"skipped":312,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 10:33:49.153: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 15 10:33:49.274: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d00a5cab-861e-4e88-adc4-c544b19de768" in namespace "downward-api-3267" to be "Succeeded or Failed" Sep 15 10:33:49.284: INFO: Pod "downwardapi-volume-d00a5cab-861e-4e88-adc4-c544b19de768": Phase="Pending", Reason="", readiness=false. 
Elapsed: 9.625107ms Sep 15 10:33:51.326: INFO: Pod "downwardapi-volume-d00a5cab-861e-4e88-adc4-c544b19de768": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051977656s Sep 15 10:33:53.330: INFO: Pod "downwardapi-volume-d00a5cab-861e-4e88-adc4-c544b19de768": Phase="Running", Reason="", readiness=true. Elapsed: 4.056045694s Sep 15 10:33:55.335: INFO: Pod "downwardapi-volume-d00a5cab-861e-4e88-adc4-c544b19de768": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.060702607s STEP: Saw pod success Sep 15 10:33:55.335: INFO: Pod "downwardapi-volume-d00a5cab-861e-4e88-adc4-c544b19de768" satisfied condition "Succeeded or Failed" Sep 15 10:33:55.338: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-d00a5cab-861e-4e88-adc4-c544b19de768 container client-container: STEP: delete the pod Sep 15 10:33:55.381: INFO: Waiting for pod downwardapi-volume-d00a5cab-861e-4e88-adc4-c544b19de768 to disappear Sep 15 10:33:55.397: INFO: Pod downwardapi-volume-d00a5cab-861e-4e88-adc4-c544b19de768 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 10:33:55.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3267" for this suite. 
• [SLOW TEST:6.258 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":303,"completed":16,"skipped":336,"failed":0}
[sig-network] Services should test the lifecycle of an Endpoint [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 15 10:33:55.412: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782
[It] should test the lifecycle of an Endpoint [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating an Endpoint
STEP: waiting for available Endpoint
STEP: listing all Endpoints
STEP: updating the Endpoint
STEP: fetching the Endpoint
STEP: patching the Endpoint
STEP: fetching the Endpoint
STEP: deleting the Endpoint by Collection
STEP: waiting for Endpoint deletion
STEP: fetching the Endpoint
[AfterEach] [sig-network] Services
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 15 10:33:55.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6704" for this suite.
[AfterEach] [sig-network] Services
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786
•{"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":303,"completed":17,"skipped":336,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Docker Containers
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 15 10:33:55.608: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Docker Containers
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 15 10:33:59.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-9624" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":303,"completed":18,"skipped":343,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 15 10:33:59.803: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Sep 15 10:34:00.380: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Sep 15 10:34:02.391: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735762840, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735762840, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735762840, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735762840, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Sep 15 10:34:05.428: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny attaching pod [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod
STEP: 'kubectl attach' the pod, should be denied by the webhook
Sep 15 10:34:09.488: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config attach --namespace=webhook-6181 to-be-attached-pod -i -c=container1'
Sep 15 10:34:09.625: INFO: rc: 1
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 15 10:34:09.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6181" for this suite.
STEP: Destroying namespace "webhook-6181-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:9.927 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should be able to deny attaching pod [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":303,"completed":19,"skipped":376,"failed":0}
SSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 15 10:34:09.730: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
Sep 15 10:34:09.765: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Sep 15 10:34:09.780: INFO: Waiting for terminating namespaces to be deleted...
Sep 15 10:34:09.782: INFO: Logging pods the apiserver thinks is on node kali-worker before test
Sep 15 10:34:09.788: INFO: kindnet-jk7qk from kube-system started at 2020-09-13 16:57:34 +0000 UTC (1 container statuses recorded)
Sep 15 10:34:09.788: INFO: Container kindnet-cni ready: true, restart count 0
Sep 15 10:34:09.788: INFO: kube-proxy-kz8hk from kube-system started at 2020-09-13 16:57:34 +0000 UTC (1 container statuses recorded)
Sep 15 10:34:09.788: INFO: Container kube-proxy ready: true, restart count 0
Sep 15 10:34:09.788: INFO: sample-webhook-deployment-cbccbf6bb-l94ph from webhook-6181 started at 2020-09-15 10:34:00 +0000 UTC (1 container statuses recorded)
Sep 15 10:34:09.788: INFO: Container sample-webhook ready: true, restart count 0
Sep 15 10:34:09.788: INFO: to-be-attached-pod from webhook-6181 started at 2020-09-15 10:34:05 +0000 UTC (1 container statuses recorded)
Sep 15 10:34:09.788: INFO: Container container1 ready: true, restart count 0
Sep 15 10:34:09.788: INFO: Logging pods the apiserver thinks is on node kali-worker2 before test
Sep 15 10:34:09.793: INFO: rally-329b674e-mdz93rxr from c-rally-329b674e-sa1npj0z started at 2020-09-15 10:34:08 +0000 UTC (1 container statuses recorded)
Sep 15 10:34:09.793: INFO: Container rally-329b674e-mdz93rxr ready: false, restart count 0
Sep 15 10:34:09.793: INFO: kindnet-r64bh from kube-system started at 2020-09-13 16:57:34 +0000 UTC (1 container statuses recorded)
Sep 15 10:34:09.793: INFO: Container kindnet-cni ready: true, restart count 0
Sep 15 10:34:09.793: INFO: kube-proxy-rnv9w from kube-system started at 2020-09-13 16:57:34 +0000 UTC (1 container statuses recorded)
Sep 15 10:34:09.793: INFO: Container kube-proxy ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: verifying the node has the label node kali-worker
STEP: verifying the node has the label node kali-worker2
Sep 15 10:34:09.901: INFO: Pod rally-329b674e-mdz93rxr requesting resource cpu=0m on Node kali-worker2
Sep 15 10:34:09.901: INFO: Pod kindnet-jk7qk requesting resource cpu=100m on Node kali-worker
Sep 15 10:34:09.901: INFO: Pod kindnet-r64bh requesting resource cpu=100m on Node kali-worker2
Sep 15 10:34:09.901: INFO: Pod kube-proxy-kz8hk requesting resource cpu=0m on Node kali-worker
Sep 15 10:34:09.901: INFO: Pod kube-proxy-rnv9w requesting resource cpu=0m on Node kali-worker2
Sep 15 10:34:09.901: INFO: Pod sample-webhook-deployment-cbccbf6bb-l94ph requesting resource cpu=0m on Node kali-worker
Sep 15 10:34:09.901: INFO: Pod to-be-attached-pod requesting resource cpu=0m on Node kali-worker
STEP: Starting Pods to consume most of the cluster CPU.
Sep 15 10:34:09.901: INFO: Creating a pod which consumes cpu=11130m on Node kali-worker
Sep 15 10:34:09.907: INFO: Creating a pod which consumes cpu=11130m on Node kali-worker2
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: Type = [Normal], Name = [filler-pod-f55366e1-4667-499b-b52b-a1519b8e31b7.1634ee8b5c89be33], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9067/filler-pod-f55366e1-4667-499b-b52b-a1519b8e31b7 to kali-worker2]
STEP: Considering event: Type = [Normal], Name = [filler-pod-a4f3f8f5-c375-425e-9d85-6449e7396ce2.1634ee8c0a957725], Reason = [Created], Message = [Created container filler-pod-a4f3f8f5-c375-425e-9d85-6449e7396ce2]
STEP: Considering event: Type = [Normal], Name = [filler-pod-a4f3f8f5-c375-425e-9d85-6449e7396ce2.1634ee8bc4481437], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-f55366e1-4667-499b-b52b-a1519b8e31b7.1634ee8dba88bbec], Reason = [Started], Message = [Started container filler-pod-f55366e1-4667-499b-b52b-a1519b8e31b7]
STEP: Considering event: Type = [Normal], Name = [filler-pod-f55366e1-4667-499b-b52b-a1519b8e31b7.1634ee8da937e2f8], Reason = [Created], Message = [Created container filler-pod-f55366e1-4667-499b-b52b-a1519b8e31b7]
STEP: Considering event: Type = [Normal], Name = [filler-pod-a4f3f8f5-c375-425e-9d85-6449e7396ce2.1634ee8b5acf900e], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9067/filler-pod-a4f3f8f5-c375-425e-9d85-6449e7396ce2 to kali-worker]
STEP: Considering event: Type = [Normal], Name = [filler-pod-f55366e1-4667-499b-b52b-a1519b8e31b7.1634ee8d6badaef2], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 6.91308287s]
STEP: Considering event: Type = [Normal], Name = [filler-pod-f55366e1-4667-499b-b52b-a1519b8e31b7.1634ee8bcf9fdeaa], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"]
STEP: Considering event: Type = [Normal], Name = [filler-pod-a4f3f8f5-c375-425e-9d85-6449e7396ce2.1634ee8c1a5c4aa1], Reason = [Started], Message = [Started container filler-pod-a4f3f8f5-c375-425e-9d85-6449e7396ce2]
STEP: Considering event: Type = [Warning], Name = [additional-pod.1634ee8e294961a5], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node kali-worker
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node kali-worker2
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 15 10:34:23.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-9067" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
• [SLOW TEST:13.328 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
validates resource limits of pods that are allowed to run [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":303,"completed":20,"skipped":383,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 15 10:34:23.059: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-map-b7671f33-d07d-453f-b5f0-412124407eea
STEP: Creating a pod to test consume secrets
Sep 15 10:34:23.172: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-59318264-7b47-4caf-8635-95788145b77f" in namespace "projected-8770" to be "Succeeded or Failed"
Sep 15 10:34:23.204: INFO: Pod "pod-projected-secrets-59318264-7b47-4caf-8635-95788145b77f": Phase="Pending", Reason="", readiness=false. Elapsed: 31.555196ms
Sep 15 10:34:25.232: INFO: Pod "pod-projected-secrets-59318264-7b47-4caf-8635-95788145b77f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059694439s
Sep 15 10:34:27.285: INFO: Pod "pod-projected-secrets-59318264-7b47-4caf-8635-95788145b77f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.112921683s
STEP: Saw pod success
Sep 15 10:34:27.285: INFO: Pod "pod-projected-secrets-59318264-7b47-4caf-8635-95788145b77f" satisfied condition "Succeeded or Failed"
Sep 15 10:34:27.288: INFO: Trying to get logs from node kali-worker2 pod pod-projected-secrets-59318264-7b47-4caf-8635-95788145b77f container projected-secret-volume-test:
STEP: delete the pod
Sep 15 10:34:27.346: INFO: Waiting for pod pod-projected-secrets-59318264-7b47-4caf-8635-95788145b77f to disappear
Sep 15 10:34:27.352: INFO: Pod pod-projected-secrets-59318264-7b47-4caf-8635-95788145b77f no longer exists
[AfterEach] [sig-storage] Projected secret
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 15 10:34:27.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8770" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":21,"skipped":390,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 15 10:34:27.360: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir volume type on node default medium
Sep 15 10:34:27.488: INFO: Waiting up to 5m0s for pod "pod-a418fe7a-0c7d-4a3e-8ef4-f77f3745365a" in namespace "emptydir-5799" to be "Succeeded or Failed"
Sep 15 10:34:27.513: INFO: Pod "pod-a418fe7a-0c7d-4a3e-8ef4-f77f3745365a": Phase="Pending", Reason="", readiness=false. Elapsed: 24.463002ms
Sep 15 10:34:29.716: INFO: Pod "pod-a418fe7a-0c7d-4a3e-8ef4-f77f3745365a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.228042783s
Sep 15 10:34:31.722: INFO: Pod "pod-a418fe7a-0c7d-4a3e-8ef4-f77f3745365a": Phase="Running", Reason="", readiness=true. Elapsed: 4.233301536s
Sep 15 10:34:33.727: INFO: Pod "pod-a418fe7a-0c7d-4a3e-8ef4-f77f3745365a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.238373936s
STEP: Saw pod success
Sep 15 10:34:33.727: INFO: Pod "pod-a418fe7a-0c7d-4a3e-8ef4-f77f3745365a" satisfied condition "Succeeded or Failed"
Sep 15 10:34:33.730: INFO: Trying to get logs from node kali-worker pod pod-a418fe7a-0c7d-4a3e-8ef4-f77f3745365a container test-container:
STEP: delete the pod
Sep 15 10:34:33.796: INFO: Waiting for pod pod-a418fe7a-0c7d-4a3e-8ef4-f77f3745365a to disappear
Sep 15 10:34:33.806: INFO: Pod pod-a418fe7a-0c7d-4a3e-8ef4-f77f3745365a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 15 10:34:33.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5799" for this suite.
• [SLOW TEST:6.454 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":22,"skipped":408,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 15 10:34:33.814: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0777 on node default medium
Sep 15 10:34:33.949: INFO: Waiting up to 5m0s for pod "pod-93b0a6db-9e70-4cc7-9920-53d1e886c2f9" in namespace "emptydir-9245" to be "Succeeded or Failed"
Sep 15 10:34:33.951: INFO: Pod "pod-93b0a6db-9e70-4cc7-9920-53d1e886c2f9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.88422ms
Sep 15 10:34:35.956: INFO: Pod "pod-93b0a6db-9e70-4cc7-9920-53d1e886c2f9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006918026s
Sep 15 10:34:37.997: INFO: Pod "pod-93b0a6db-9e70-4cc7-9920-53d1e886c2f9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.048595593s
STEP: Saw pod success
Sep 15 10:34:37.997: INFO: Pod "pod-93b0a6db-9e70-4cc7-9920-53d1e886c2f9" satisfied condition "Succeeded or Failed"
Sep 15 10:34:37.999: INFO: Trying to get logs from node kali-worker pod pod-93b0a6db-9e70-4cc7-9920-53d1e886c2f9 container test-container:
STEP: delete the pod
Sep 15 10:34:38.035: INFO: Waiting for pod pod-93b0a6db-9e70-4cc7-9920-53d1e886c2f9 to disappear
Sep 15 10:34:38.059: INFO: Pod pod-93b0a6db-9e70-4cc7-9920-53d1e886c2f9 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 15 10:34:38.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9245" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":23,"skipped":435,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 15 10:34:38.067: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0777 on tmpfs
Sep 15 10:34:38.156: INFO: Waiting up to 5m0s for pod "pod-0f804e2f-0b66-484b-840b-2d9886565e21" in namespace "emptydir-9641" to be "Succeeded or Failed"
Sep 15 10:34:38.175: INFO: Pod "pod-0f804e2f-0b66-484b-840b-2d9886565e21": Phase="Pending", Reason="", readiness=false. Elapsed: 19.29207ms
Sep 15 10:34:40.267: INFO: Pod "pod-0f804e2f-0b66-484b-840b-2d9886565e21": Phase="Pending", Reason="", readiness=false. Elapsed: 2.110853428s
Sep 15 10:34:42.271: INFO: Pod "pod-0f804e2f-0b66-484b-840b-2d9886565e21": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.115528738s
STEP: Saw pod success
Sep 15 10:34:42.271: INFO: Pod "pod-0f804e2f-0b66-484b-840b-2d9886565e21" satisfied condition "Succeeded or Failed"
Sep 15 10:34:42.275: INFO: Trying to get logs from node kali-worker2 pod pod-0f804e2f-0b66-484b-840b-2d9886565e21 container test-container:
STEP: delete the pod
Sep 15 10:34:42.746: INFO: Waiting for pod pod-0f804e2f-0b66-484b-840b-2d9886565e21 to disappear
Sep 15 10:34:42.884: INFO: Pod pod-0f804e2f-0b66-484b-840b-2d9886565e21 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 15 10:34:42.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9641" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":24,"skipped":467,"failed":0}
SSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 15 10:34:42.921: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Sep 15 10:34:43.622: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Sep 15 10:34:45.633: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735762883, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735762883, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735762883, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735762883, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep 15 10:34:47.637: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735762883, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735762883, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735762883, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735762883, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Sep 15 10:34:50.669: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing validating webhooks should work [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 15 10:34:51.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9879" for this suite.
STEP: Destroying namespace "webhook-9879-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:8.438 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
listing validating webhooks should work [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":303,"completed":25,"skipped":471,"failed":0}
SSSS
------------------------------
[k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 15 10:34:51.359: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181
[It] should be submitted and removed [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Sep 15 10:34:51.426: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 15 10:35:13.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1260" for this suite.
• [SLOW TEST:21.828 seconds]
[k8s.io] Pods
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
should be submitted and removed [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":303,"completed":26,"skipped":475,"failed":0}
SS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 15 10:35:13.188: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-map-b564b7d2-a4dd-4d41-a55e-8f20e37f745c
STEP: Creating a pod to test consume secrets
Sep 15 10:35:13.423: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c9d26cd3-17d0-47b1-be58-d57879c00ff0" in namespace "projected-2178" to be "Succeeded or Failed"
Sep 15 10:35:13.427: INFO: Pod "pod-projected-secrets-c9d26cd3-17d0-47b1-be58-d57879c00ff0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.933632ms
Sep 15 10:35:16.017: INFO: Pod "pod-projected-secrets-c9d26cd3-17d0-47b1-be58-d57879c00ff0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.593669332s
Sep 15 10:35:18.070: INFO: Pod "pod-projected-secrets-c9d26cd3-17d0-47b1-be58-d57879c00ff0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.64691657s
STEP: Saw pod success
Sep 15 10:35:18.070: INFO: Pod "pod-projected-secrets-c9d26cd3-17d0-47b1-be58-d57879c00ff0" satisfied condition "Succeeded or Failed"
Sep 15 10:35:18.073: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-c9d26cd3-17d0-47b1-be58-d57879c00ff0 container projected-secret-volume-test:
STEP: delete the pod
Sep 15 10:35:18.101: INFO: Waiting for pod pod-projected-secrets-c9d26cd3-17d0-47b1-be58-d57879c00ff0 to disappear
Sep 15 10:35:18.114: INFO: Pod pod-projected-secrets-c9d26cd3-17d0-47b1-be58-d57879c00ff0 no longer exists
[AfterEach] [sig-storage] Projected secret
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 15 10:35:18.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2178" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":303,"completed":27,"skipped":477,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 15 10:35:18.121: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256
[It] should support --unix-socket=/path [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Starting the proxy
Sep 15 10:35:18.339: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix296008582/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 15 10:35:18.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5473" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":303,"completed":28,"skipped":501,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 15 10:35:18.445: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Sep 15 10:35:18.500: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8f6efc9b-5ceb-4e07-878d-5d39fb3d7ace" in namespace "projected-147" to be "Succeeded or Failed"
Sep 15 10:35:18.533: INFO: Pod "downwardapi-volume-8f6efc9b-5ceb-4e07-878d-5d39fb3d7ace": Phase="Pending", Reason="", readiness=false. Elapsed: 33.49607ms
Sep 15 10:35:20.539: INFO: Pod "downwardapi-volume-8f6efc9b-5ceb-4e07-878d-5d39fb3d7ace": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038972106s
Sep 15 10:35:22.543: INFO: Pod "downwardapi-volume-8f6efc9b-5ceb-4e07-878d-5d39fb3d7ace": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043510511s
STEP: Saw pod success
Sep 15 10:35:22.543: INFO: Pod "downwardapi-volume-8f6efc9b-5ceb-4e07-878d-5d39fb3d7ace" satisfied condition "Succeeded or Failed"
Sep 15 10:35:22.547: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-8f6efc9b-5ceb-4e07-878d-5d39fb3d7ace container client-container:
STEP: delete the pod
Sep 15 10:35:22.577: INFO: Waiting for pod downwardapi-volume-8f6efc9b-5ceb-4e07-878d-5d39fb3d7ace to disappear
Sep 15 10:35:22.599: INFO: Pod downwardapi-volume-8f6efc9b-5ceb-4e07-878d-5d39fb3d7ace no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 15 10:35:22.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-147" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":303,"completed":29,"skipped":546,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 15 10:35:22.606: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-upd-94811c81-ec99-45bb-8c8c-ef5481d83c07
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-94811c81-ec99-45bb-8c8c-ef5481d83c07
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 15 10:35:28.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3002" for this suite.
• [SLOW TEST:6.159 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
updates should be reflected in volume [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":30,"skipped":554,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 15 10:35:28.765: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicationController
STEP: Ensuring resource quota status captures replication controller creation
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 15 10:35:39.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4585" for this suite.
• [SLOW TEST:11.137 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should create a ResourceQuota and capture the life of a replication controller. [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":303,"completed":31,"skipped":602,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 15 10:35:39.903: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should update labels on modification [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating the pod
Sep 15 10:35:44.491: INFO: Successfully updated pod "labelsupdatef4ec70ee-1d35-473b-9cd3-4cf1aea22b0a"
[AfterEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 15 10:35:48.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5499" for this suite.
• [SLOW TEST:8.699 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
should update labels on modification [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":303,"completed":32,"skipped":615,"failed":0}
S
------------------------------
[k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 15 10:35:48.602: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Probing container
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 15 10:36:48.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3879" for this suite.
• [SLOW TEST:60.080 seconds]
[k8s.io] Probing container
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":303,"completed":33,"skipped":616,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 15 10:36:48.682: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Sep 15 10:36:48.766: INFO: Waiting up to 5m0s for pod "downwardapi-volume-00cbbfea-51e1-4b2a-9a6c-e4c5022cc24a" in namespace "downward-api-9657" to be "Succeeded or Failed"
Sep 15 10:36:48.784: INFO: Pod "downwardapi-volume-00cbbfea-51e1-4b2a-9a6c-e4c5022cc24a": Phase="Pending", Reason="", readiness=false. Elapsed: 18.633948ms
Sep 15 10:36:50.856: INFO: Pod "downwardapi-volume-00cbbfea-51e1-4b2a-9a6c-e4c5022cc24a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08995408s
Sep 15 10:36:52.951: INFO: Pod "downwardapi-volume-00cbbfea-51e1-4b2a-9a6c-e4c5022cc24a": Phase="Running", Reason="", readiness=true. Elapsed: 4.185595027s
Sep 15 10:36:54.957: INFO: Pod "downwardapi-volume-00cbbfea-51e1-4b2a-9a6c-e4c5022cc24a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.190931155s
STEP: Saw pod success
Sep 15 10:36:54.957: INFO: Pod "downwardapi-volume-00cbbfea-51e1-4b2a-9a6c-e4c5022cc24a" satisfied condition "Succeeded or Failed"
Sep 15 10:36:54.959: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-00cbbfea-51e1-4b2a-9a6c-e4c5022cc24a container client-container:
STEP: delete the pod
Sep 15 10:36:55.022: INFO: Waiting for pod downwardapi-volume-00cbbfea-51e1-4b2a-9a6c-e4c5022cc24a to disappear
Sep 15 10:36:55.032: INFO: Pod downwardapi-volume-00cbbfea-51e1-4b2a-9a6c-e4c5022cc24a no longer exists
[AfterEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 15 10:36:55.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9657" for this suite.
• [SLOW TEST:6.359 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":34,"skipped":644,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 15 10:36:55.042: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should patch a Namespace [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a Namespace
STEP: patching the Namespace
STEP: get the Namespace and ensuring it has the label
[AfterEach] [sig-api-machinery] Namespaces [Serial]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 15 10:36:56.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-429" for this suite.
STEP: Destroying namespace "nspatchtest-54f362b6-b932-4e41-9fb6-fab1de5df954-7325" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":303,"completed":35,"skipped":659,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 15 10:36:56.289: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-f7b1d718-3e52-41c8-a1fc-e34d3f4ce505
STEP: Creating a pod to test consume configMaps
Sep 15 10:36:56.424: INFO: Waiting up to 5m0s for pod "pod-configmaps-5646a84b-7743-4538-9153-ad3b81799739" in namespace "configmap-2745" to be "Succeeded or Failed"
Sep 15 10:36:56.428: INFO: Pod "pod-configmaps-5646a84b-7743-4538-9153-ad3b81799739": Phase="Pending", Reason="", readiness=false. Elapsed: 3.926101ms
Sep 15 10:36:58.484: INFO: Pod "pod-configmaps-5646a84b-7743-4538-9153-ad3b81799739": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06001283s
Sep 15 10:37:00.488: INFO: Pod "pod-configmaps-5646a84b-7743-4538-9153-ad3b81799739": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.063795023s
STEP: Saw pod success
Sep 15 10:37:00.488: INFO: Pod "pod-configmaps-5646a84b-7743-4538-9153-ad3b81799739" satisfied condition "Succeeded or Failed"
Sep 15 10:37:00.491: INFO: Trying to get logs from node kali-worker pod pod-configmaps-5646a84b-7743-4538-9153-ad3b81799739 container configmap-volume-test:
STEP: delete the pod
Sep 15 10:37:00.545: INFO: Waiting for pod pod-configmaps-5646a84b-7743-4538-9153-ad3b81799739 to disappear
Sep 15 10:37:00.554: INFO: Pod pod-configmaps-5646a84b-7743-4538-9153-ad3b81799739 no longer exists
[AfterEach] [sig-storage] ConfigMap
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 15 10:37:00.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2745" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":303,"completed":36,"skipped":694,"failed":0}
SSS
------------------------------
[sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 15 10:37:00.563: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Sep 15 10:37:00.611: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9e6e4aec-5121-4a9e-94d2-8bd70f5ea8a6" in namespace "downward-api-9596" to be "Succeeded or Failed"
Sep 15 10:37:00.614: INFO: Pod "downwardapi-volume-9e6e4aec-5121-4a9e-94d2-8bd70f5ea8a6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.412725ms
Sep 15 10:37:02.652: INFO: Pod "downwardapi-volume-9e6e4aec-5121-4a9e-94d2-8bd70f5ea8a6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041069599s
Sep 15 10:37:04.656: INFO: Pod "downwardapi-volume-9e6e4aec-5121-4a9e-94d2-8bd70f5ea8a6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.045647226s
STEP: Saw pod success
Sep 15 10:37:04.657: INFO: Pod "downwardapi-volume-9e6e4aec-5121-4a9e-94d2-8bd70f5ea8a6" satisfied condition "Succeeded or Failed"
Sep 15 10:37:04.660: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-9e6e4aec-5121-4a9e-94d2-8bd70f5ea8a6 container client-container:
STEP: delete the pod
Sep 15 10:37:04.690: INFO: Waiting for pod downwardapi-volume-9e6e4aec-5121-4a9e-94d2-8bd70f5ea8a6 to disappear
Sep 15 10:37:04.729: INFO: Pod downwardapi-volume-9e6e4aec-5121-4a9e-94d2-8bd70f5ea8a6 no longer exists
[AfterEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 15 10:37:04.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9596" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":303,"completed":37,"skipped":697,"failed":0}
SSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 15 10:37:04.737: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-f622d6af-1244-4ddb-939a-88f7d4f822f1
STEP: Creating a pod to test consume secrets
Sep 15 10:37:04.904: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c2c18c03-38f9-4db9-844d-05336e97268c" in namespace "projected-6200" to be "Succeeded or Failed"
Sep 15 10:37:04.913: INFO: Pod "pod-projected-secrets-c2c18c03-38f9-4db9-844d-05336e97268c": Phase="Pending", Reason="", readiness=false. Elapsed: 9.052756ms
Sep 15 10:37:06.969: INFO: Pod "pod-projected-secrets-c2c18c03-38f9-4db9-844d-05336e97268c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065033578s
Sep 15 10:37:09.089: INFO: Pod "pod-projected-secrets-c2c18c03-38f9-4db9-844d-05336e97268c": Phase="Running", Reason="", readiness=true. Elapsed: 4.184574448s
Sep 15 10:37:11.093: INFO: Pod "pod-projected-secrets-c2c18c03-38f9-4db9-844d-05336e97268c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.18831461s
STEP: Saw pod success
Sep 15 10:37:11.093: INFO: Pod "pod-projected-secrets-c2c18c03-38f9-4db9-844d-05336e97268c" satisfied condition "Succeeded or Failed"
Sep 15 10:37:11.095: INFO: Trying to get logs from node kali-worker2 pod pod-projected-secrets-c2c18c03-38f9-4db9-844d-05336e97268c container projected-secret-volume-test:
STEP: delete the pod
Sep 15 10:37:11.123: INFO: Waiting for pod pod-projected-secrets-c2c18c03-38f9-4db9-844d-05336e97268c to disappear
Sep 15 10:37:11.129: INFO: Pod pod-projected-secrets-c2c18c03-38f9-4db9-844d-05336e97268c no longer exists
[AfterEach] [sig-storage] Projected secret
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 15 10:37:11.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6200" for this suite.
• [SLOW TEST:6.407 seconds] [sig-storage] Projected secret /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":38,"skipped":700,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 10:37:11.145: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod Sep 15 10:37:15.753: INFO: Successfully updated pod "labelsupdatec3478c8c-b817-4d1e-be1c-b5d389d57739" [AfterEach] 
[sig-storage] Projected downwardAPI /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 10:37:17.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6977" for this suite. • [SLOW TEST:6.717 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":303,"completed":39,"skipped":708,"failed":0} SSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 10:37:17.862: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name 
projected-configmap-test-volume-9826b4d8-0abc-4f1e-87ca-7e77683f6890 STEP: Creating a pod to test consume configMaps Sep 15 10:37:17.935: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-36c829fe-13b2-4f84-8a2c-011e2e78cfeb" in namespace "projected-9048" to be "Succeeded or Failed" Sep 15 10:37:17.945: INFO: Pod "pod-projected-configmaps-36c829fe-13b2-4f84-8a2c-011e2e78cfeb": Phase="Pending", Reason="", readiness=false. Elapsed: 9.747998ms Sep 15 10:37:20.077: INFO: Pod "pod-projected-configmaps-36c829fe-13b2-4f84-8a2c-011e2e78cfeb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.142464001s Sep 15 10:37:22.172: INFO: Pod "pod-projected-configmaps-36c829fe-13b2-4f84-8a2c-011e2e78cfeb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.236820254s STEP: Saw pod success Sep 15 10:37:22.172: INFO: Pod "pod-projected-configmaps-36c829fe-13b2-4f84-8a2c-011e2e78cfeb" satisfied condition "Succeeded or Failed" Sep 15 10:37:22.174: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-36c829fe-13b2-4f84-8a2c-011e2e78cfeb container projected-configmap-volume-test: STEP: delete the pod Sep 15 10:37:22.217: INFO: Waiting for pod pod-projected-configmaps-36c829fe-13b2-4f84-8a2c-011e2e78cfeb to disappear Sep 15 10:37:22.298: INFO: Pod pod-projected-configmaps-36c829fe-13b2-4f84-8a2c-011e2e78cfeb no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 10:37:22.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9048" for this suite. 
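The projected configMap volume consumed above surfaces each ConfigMap key as a file inside the container's mount path. A rough sketch of that projection in plain Python (deliberately simplified: the real kubelet writes atomically via a `..data` symlink swap, which is omitted here; `project_configmap` is a hypothetical helper, not a kubelet API):

```python
import os
import stat
import tempfile

def project_configmap(data, mount_dir, default_mode=0o644):
    """Materialize ConfigMap key/value pairs as files under mount_dir.

    Each key becomes one file whose contents are the value, with
    default_mode permissions -- the same defaultMode notion the
    projected-volume tests exercise.
    """
    paths = {}
    for key, value in data.items():
        path = os.path.join(mount_dir, key)
        with open(path, "w") as f:
            f.write(value)
        os.chmod(path, default_mode)  # apply the volume's defaultMode
        paths[key] = path
    return paths
```

The non-root variant of the test then only has to verify that a container running as an unprivileged UID can read those files, which 0644 permits.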
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":303,"completed":40,"skipped":713,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 10:37:22.308: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-734efafa-a122-44d2-9eb5-d1c9f32214b2 STEP: Creating a pod to test consume secrets Sep 15 10:37:22.460: INFO: Waiting up to 5m0s for pod "pod-secrets-a557ee73-fd20-4fdb-9928-e850fdec4e9f" in namespace "secrets-5089" to be "Succeeded or Failed" Sep 15 10:37:22.477: INFO: Pod "pod-secrets-a557ee73-fd20-4fdb-9928-e850fdec4e9f": Phase="Pending", Reason="", readiness=false. Elapsed: 16.291185ms Sep 15 10:37:24.480: INFO: Pod "pod-secrets-a557ee73-fd20-4fdb-9928-e850fdec4e9f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.019600963s Sep 15 10:37:26.484: INFO: Pod "pod-secrets-a557ee73-fd20-4fdb-9928-e850fdec4e9f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023942256s STEP: Saw pod success Sep 15 10:37:26.484: INFO: Pod "pod-secrets-a557ee73-fd20-4fdb-9928-e850fdec4e9f" satisfied condition "Succeeded or Failed" Sep 15 10:37:26.487: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-a557ee73-fd20-4fdb-9928-e850fdec4e9f container secret-volume-test: STEP: delete the pod Sep 15 10:37:26.578: INFO: Waiting for pod pod-secrets-a557ee73-fd20-4fdb-9928-e850fdec4e9f to disappear Sep 15 10:37:26.701: INFO: Pod pod-secrets-a557ee73-fd20-4fdb-9928-e850fdec4e9f no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 10:37:26.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5089" for this suite. STEP: Destroying namespace "secret-namespace-4625" for this suite. 
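The test above hinges on secrets being namespace-scoped: a secret named `secret-test-…` in `secret-namespace-4625` cannot shadow or collide with the same-named secret in `secrets-5089`, so the pod mounts exactly the one from its own namespace. A toy model of that storage keying (the `SecretStore` class is illustrative, not the API server's implementation):

```python
class SecretStore:
    """Toy model of namespaced secret storage.

    Objects are keyed by (namespace, name), so identically named secrets
    in different namespaces are independent -- the property the mount
    test above verifies.
    """

    def __init__(self):
        self._items = {}

    def create(self, namespace, name, data):
        key = (namespace, name)
        if key in self._items:
            raise KeyError(f"secret {name} already exists in {namespace}")
        self._items[key] = dict(data)

    def get(self, namespace, name):
        # Lookup never falls back to another namespace.
        return self._items[(namespace, name)]
```

Because lookup always includes the namespace, creating the decoy secret in the second namespace has no effect on what the pod's volume sees.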
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":303,"completed":41,"skipped":751,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 10:37:26.718: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:78 [It] deployment should support rollover [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 15 10:37:26.782: INFO: Pod name rollover-pod: Found 0 pods out of 1 Sep 15 10:37:31.785: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Sep 15 10:37:39.791: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Sep 15 10:37:41.799: INFO: Creating deployment "test-rollover-deployment" Sep 15 10:37:41.831: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Sep 15 10:37:43.846: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Sep 15 10:37:43.852: INFO: Ensure that both replica sets have 1 created replica Sep 15 10:37:43.857: INFO: Rollover 
old replica sets for deployment "test-rollover-deployment" with new image update Sep 15 10:37:43.865: INFO: Updating deployment test-rollover-deployment Sep 15 10:37:43.865: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Sep 15 10:37:45.877: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Sep 15 10:37:45.885: INFO: Make sure deployment "test-rollover-deployment" is complete Sep 15 10:37:45.891: INFO: all replica sets need to contain the pod-template-hash label Sep 15 10:37:45.891: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735763061, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735763061, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735763064, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735763061, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 15 10:37:47.901: INFO: all replica sets need to contain the pod-template-hash label Sep 15 10:37:47.901: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735763061, loc:(*time.Location)(0x7702840)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735763061, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735763064, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735763061, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 15 10:37:49.900: INFO: all replica sets need to contain the pod-template-hash label Sep 15 10:37:49.900: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735763061, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735763061, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735763068, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735763061, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 15 10:37:51.899: INFO: all replica sets need to contain the pod-template-hash label Sep 15 10:37:51.900: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735763061, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735763061, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735763068, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735763061, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 15 10:37:53.899: INFO: all replica sets need to contain the pod-template-hash label Sep 15 10:37:53.900: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735763061, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735763061, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735763068, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735763061, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 15 10:37:55.900: INFO: all replica sets need to contain the pod-template-hash label Sep 15 10:37:55.900: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, 
UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735763061, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735763061, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735763068, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735763061, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 15 10:37:57.900: INFO: all replica sets need to contain the pod-template-hash label Sep 15 10:37:57.900: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735763061, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735763061, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735763068, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735763061, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 15 10:37:59.899: INFO: Sep 15 10:37:59.899: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 Sep 15 10:37:59.906: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-3315 /apis/apps/v1/namespaces/deployment-3315/deployments/test-rollover-deployment 27ebcb7e-4279-4189-bb69-6edfcded595c 431346 2 2020-09-15 10:37:41 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-09-15 10:37:43 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-09-15 10:37:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: 
rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0056a3508 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-09-15 10:37:41 +0000 UTC,LastTransitionTime:2020-09-15 10:37:41 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-5797c7764" has successfully progressed.,LastUpdateTime:2020-09-15 10:37:58 +0000 UTC,LastTransitionTime:2020-09-15 10:37:41 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Sep 15 10:37:59.910: INFO: New ReplicaSet "test-rollover-deployment-5797c7764" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-5797c7764 deployment-3315 
/apis/apps/v1/namespaces/deployment-3315/replicasets/test-rollover-deployment-5797c7764 4874a6a0-7c2a-4599-926c-e6a7687e12ee 431335 2 2020-09-15 10:37:43 +0000 UTC map[name:rollover-pod pod-template-hash:5797c7764] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 27ebcb7e-4279-4189-bb69-6edfcded595c 0xc0056f56a0 0xc0056f56a1}] [] [{kube-controller-manager Update apps/v1 2020-09-15 10:37:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"27ebcb7e-4279-4189-bb69-6edfcded595c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5797c7764,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:5797c7764] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] 
[] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0056f5718 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Sep 15 10:37:59.910: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Sep 15 10:37:59.910: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-3315 /apis/apps/v1/namespaces/deployment-3315/replicasets/test-rollover-controller 199f80a0-3ba8-44d7-b2bc-d79c842dbc0b 431345 2 2020-09-15 10:37:26 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 27ebcb7e-4279-4189-bb69-6edfcded595c 0xc0056f5597 0xc0056f5598}] [] [{e2e.test Update apps/v1 2020-09-15 10:37:26 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update 
apps/v1 2020-09-15 10:37:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"27ebcb7e-4279-4189-bb69-6edfcded595c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0056f5638 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Sep 15 10:37:59.910: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-78bc8b888c deployment-3315 /apis/apps/v1/namespaces/deployment-3315/replicasets/test-rollover-deployment-78bc8b888c 943c994f-f8ee-4167-989f-6e3536b36cad 431250 2 2020-09-15 10:37:41 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 27ebcb7e-4279-4189-bb69-6edfcded595c 0xc0056f57a7 0xc0056f57a8}] [] [{kube-controller-manager Update apps/v1 2020-09-15 10:37:44 +0000 
UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"27ebcb7e-4279-4189-bb69-6edfcded595c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 78bc8b888c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0056f5848 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Sep 15 10:37:59.914: INFO: Pod "test-rollover-deployment-5797c7764-v2zrm" is available: &Pod{ObjectMeta:{test-rollover-deployment-5797c7764-v2zrm test-rollover-deployment-5797c7764- deployment-3315 /api/v1/namespaces/deployment-3315/pods/test-rollover-deployment-5797c7764-v2zrm e4f9e89f-f02b-467b-851e-9f31432f2aff 431273 0 2020-09-15 10:37:44 +0000 UTC map[name:rollover-pod pod-template-hash:5797c7764] map[] [{apps/v1 ReplicaSet test-rollover-deployment-5797c7764 4874a6a0-7c2a-4599-926c-e6a7687e12ee 0xc0056f5f20 0xc0056f5f21}] [] [{kube-controller-manager Update v1 2020-09-15 10:37:44 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4874a6a0-7c2a-4599-926c-e6a7687e12ee\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-15 10:37:48 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.34\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xql52,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xql52,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xql52,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolic
y:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:37:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:37:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:37:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:37:44 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:10.244.1.34,StartTime:2020-09-15 10:37:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-15 10:37:47 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0,ContainerID:containerd://212320a2f3e7f5e2c71f1eb54cf33d3d09da036453cc02e4ed7c9db423ba0aec,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.34,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 10:37:59.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3315" for this suite. 
• [SLOW TEST:33.204 seconds] [sig-apps] Deployment /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":303,"completed":42,"skipped":775,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 10:37:59.923: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Sep 15 10:38:00.021: INFO: Waiting up to 5m0s for pod "downward-api-5e977d1b-c26f-494e-a632-d215a6fadde1" in namespace "downward-api-5355" to be "Succeeded or Failed" Sep 15 10:38:00.036: INFO: Pod "downward-api-5e977d1b-c26f-494e-a632-d215a6fadde1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 15.153188ms Sep 15 10:38:02.039: INFO: Pod "downward-api-5e977d1b-c26f-494e-a632-d215a6fadde1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018664814s Sep 15 10:38:04.044: INFO: Pod "downward-api-5e977d1b-c26f-494e-a632-d215a6fadde1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022968236s STEP: Saw pod success Sep 15 10:38:04.044: INFO: Pod "downward-api-5e977d1b-c26f-494e-a632-d215a6fadde1" satisfied condition "Succeeded or Failed" Sep 15 10:38:04.046: INFO: Trying to get logs from node kali-worker2 pod downward-api-5e977d1b-c26f-494e-a632-d215a6fadde1 container dapi-container: STEP: delete the pod Sep 15 10:38:04.082: INFO: Waiting for pod downward-api-5e977d1b-c26f-494e-a632-d215a6fadde1 to disappear Sep 15 10:38:04.113: INFO: Pod downward-api-5e977d1b-c26f-494e-a632-d215a6fadde1 no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 10:38:04.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5355" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":303,"completed":43,"skipped":809,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Service endpoints latency /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 10:38:04.122: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 15 10:38:04.195: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-4788 I0915 10:38:04.213383 7 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-4788, replica count: 1 I0915 10:38:05.263719 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0915 10:38:06.263932 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0915 10:38:07.264207 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0915 10:38:08.264413 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 
running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Sep 15 10:38:08.413: INFO: Created: latency-svc-kwx2z Sep 15 10:38:08.421: INFO: Got endpoints: latency-svc-kwx2z [56.625949ms] Sep 15 10:38:08.490: INFO: Created: latency-svc-gqkv8 Sep 15 10:38:08.504: INFO: Got endpoints: latency-svc-gqkv8 [83.349282ms] Sep 15 10:38:08.551: INFO: Created: latency-svc-4lbwr Sep 15 10:38:08.558: INFO: Got endpoints: latency-svc-4lbwr [137.135504ms] Sep 15 10:38:08.577: INFO: Created: latency-svc-6rgrr Sep 15 10:38:08.594: INFO: Got endpoints: latency-svc-6rgrr [173.212257ms] Sep 15 10:38:08.613: INFO: Created: latency-svc-k5r5s Sep 15 10:38:08.644: INFO: Got endpoints: latency-svc-k5r5s [222.950195ms] Sep 15 10:38:08.693: INFO: Created: latency-svc-d5jwm Sep 15 10:38:08.709: INFO: Got endpoints: latency-svc-d5jwm [287.492924ms] Sep 15 10:38:08.744: INFO: Created: latency-svc-vq92n Sep 15 10:38:08.751: INFO: Got endpoints: latency-svc-vq92n [329.942579ms] Sep 15 10:38:08.845: INFO: Created: latency-svc-5zms6 Sep 15 10:38:08.859: INFO: Got endpoints: latency-svc-5zms6 [437.780889ms] Sep 15 10:38:08.895: INFO: Created: latency-svc-24dx8 Sep 15 10:38:08.913: INFO: Got endpoints: latency-svc-24dx8 [492.308267ms] Sep 15 10:38:08.937: INFO: Created: latency-svc-bd49j Sep 15 10:38:09.007: INFO: Got endpoints: latency-svc-bd49j [585.365355ms] Sep 15 10:38:09.011: INFO: Created: latency-svc-6xf27 Sep 15 10:38:09.015: INFO: Got endpoints: latency-svc-6xf27 [594.167242ms] Sep 15 10:38:09.075: INFO: Created: latency-svc-qw2st Sep 15 10:38:09.094: INFO: Got endpoints: latency-svc-qw2st [672.474447ms] Sep 15 10:38:09.201: INFO: Created: latency-svc-lj4ht Sep 15 10:38:09.208: INFO: Got endpoints: latency-svc-lj4ht [786.399265ms] Sep 15 10:38:09.267: INFO: Created: latency-svc-l8lf4 Sep 15 10:38:09.286: INFO: Got endpoints: latency-svc-l8lf4 [864.719027ms] Sep 15 10:38:09.384: INFO: Created: latency-svc-b4k4z Sep 15 10:38:09.399: INFO: Got endpoints: 
latency-svc-b4k4z [978.109972ms] Sep 15 10:38:09.420: INFO: Created: latency-svc-f25bs Sep 15 10:38:09.449: INFO: Got endpoints: latency-svc-f25bs [1.028048765s] Sep 15 10:38:09.479: INFO: Created: latency-svc-w6pxt Sep 15 10:38:09.545: INFO: Got endpoints: latency-svc-w6pxt [1.040816713s] Sep 15 10:38:09.578: INFO: Created: latency-svc-bp9j4 Sep 15 10:38:09.592: INFO: Got endpoints: latency-svc-bp9j4 [1.034225894s] Sep 15 10:38:09.621: INFO: Created: latency-svc-q2wj2 Sep 15 10:38:09.634: INFO: Got endpoints: latency-svc-q2wj2 [1.039958238s] Sep 15 10:38:09.696: INFO: Created: latency-svc-jj924 Sep 15 10:38:09.701: INFO: Got endpoints: latency-svc-jj924 [1.056997487s] Sep 15 10:38:09.731: INFO: Created: latency-svc-bwr6b Sep 15 10:38:09.761: INFO: Got endpoints: latency-svc-bwr6b [1.051911751s] Sep 15 10:38:09.794: INFO: Created: latency-svc-jtfnk Sep 15 10:38:09.850: INFO: Got endpoints: latency-svc-jtfnk [1.099048818s] Sep 15 10:38:09.879: INFO: Created: latency-svc-bmdch Sep 15 10:38:09.887: INFO: Got endpoints: latency-svc-bmdch [1.028065765s] Sep 15 10:38:09.941: INFO: Created: latency-svc-x78d6 Sep 15 10:38:10.000: INFO: Got endpoints: latency-svc-x78d6 [1.086673792s] Sep 15 10:38:10.029: INFO: Created: latency-svc-spl74 Sep 15 10:38:10.045: INFO: Got endpoints: latency-svc-spl74 [1.037966946s] Sep 15 10:38:10.095: INFO: Created: latency-svc-2sr7c Sep 15 10:38:10.174: INFO: Got endpoints: latency-svc-2sr7c [1.159033875s] Sep 15 10:38:10.178: INFO: Created: latency-svc-6z2md Sep 15 10:38:10.223: INFO: Created: latency-svc-t5cng Sep 15 10:38:10.223: INFO: Got endpoints: latency-svc-6z2md [1.128940456s] Sep 15 10:38:10.252: INFO: Got endpoints: latency-svc-t5cng [1.044556333s] Sep 15 10:38:10.336: INFO: Created: latency-svc-jc5mf Sep 15 10:38:10.339: INFO: Got endpoints: latency-svc-jc5mf [1.053317305s] Sep 15 10:38:10.364: INFO: Created: latency-svc-8m5hz Sep 15 10:38:10.397: INFO: Got endpoints: latency-svc-8m5hz [997.653297ms] Sep 15 10:38:10.426: INFO: 
Created: latency-svc-sqhbs Sep 15 10:38:10.503: INFO: Got endpoints: latency-svc-sqhbs [1.054207428s] Sep 15 10:38:10.505: INFO: Created: latency-svc-hbspg Sep 15 10:38:10.513: INFO: Got endpoints: latency-svc-hbspg [968.038828ms] Sep 15 10:38:10.544: INFO: Created: latency-svc-6jmgx Sep 15 10:38:10.562: INFO: Got endpoints: latency-svc-6jmgx [969.543346ms] Sep 15 10:38:10.586: INFO: Created: latency-svc-hx9l6 Sep 15 10:38:10.659: INFO: Got endpoints: latency-svc-hx9l6 [1.024587466s] Sep 15 10:38:10.662: INFO: Created: latency-svc-d2stz Sep 15 10:38:10.670: INFO: Got endpoints: latency-svc-d2stz [969.064792ms] Sep 15 10:38:10.694: INFO: Created: latency-svc-25lmp Sep 15 10:38:10.713: INFO: Got endpoints: latency-svc-25lmp [951.763079ms] Sep 15 10:38:10.750: INFO: Created: latency-svc-z8wgh Sep 15 10:38:10.838: INFO: Got endpoints: latency-svc-z8wgh [987.963165ms] Sep 15 10:38:10.840: INFO: Created: latency-svc-xs8kt Sep 15 10:38:10.874: INFO: Got endpoints: latency-svc-xs8kt [987.074515ms] Sep 15 10:38:10.916: INFO: Created: latency-svc-nlzch Sep 15 10:38:10.935: INFO: Got endpoints: latency-svc-nlzch [934.832697ms] Sep 15 10:38:10.988: INFO: Created: latency-svc-kpwh5 Sep 15 10:38:11.014: INFO: Got endpoints: latency-svc-kpwh5 [969.331367ms] Sep 15 10:38:11.015: INFO: Created: latency-svc-pt7lv Sep 15 10:38:11.044: INFO: Got endpoints: latency-svc-pt7lv [869.987234ms] Sep 15 10:38:11.078: INFO: Created: latency-svc-qmlnv Sep 15 10:38:11.145: INFO: Got endpoints: latency-svc-qmlnv [921.963357ms] Sep 15 10:38:11.177: INFO: Created: latency-svc-29pjd Sep 15 10:38:11.194: INFO: Got endpoints: latency-svc-29pjd [941.626073ms] Sep 15 10:38:11.324: INFO: Created: latency-svc-p77nh Sep 15 10:38:11.332: INFO: Got endpoints: latency-svc-p77nh [992.288257ms] Sep 15 10:38:11.354: INFO: Created: latency-svc-vl82w Sep 15 10:38:11.368: INFO: Got endpoints: latency-svc-vl82w [970.953076ms] Sep 15 10:38:11.492: INFO: Created: latency-svc-2qrxm Sep 15 10:38:11.518: INFO: Got 
endpoints: latency-svc-2qrxm [1.014985677s] Sep 15 10:38:11.552: INFO: Created: latency-svc-gtsxh Sep 15 10:38:11.567: INFO: Got endpoints: latency-svc-gtsxh [1.053400774s] Sep 15 10:38:11.588: INFO: Created: latency-svc-wltzb Sep 15 10:38:11.665: INFO: Got endpoints: latency-svc-wltzb [1.103352861s] Sep 15 10:38:11.668: INFO: Created: latency-svc-czchw Sep 15 10:38:11.676: INFO: Got endpoints: latency-svc-czchw [1.017529535s] Sep 15 10:38:11.716: INFO: Created: latency-svc-cv5j6 Sep 15 10:38:11.735: INFO: Got endpoints: latency-svc-cv5j6 [1.065170739s] Sep 15 10:38:11.821: INFO: Created: latency-svc-9zhtj Sep 15 10:38:11.825: INFO: Got endpoints: latency-svc-9zhtj [1.112117045s] Sep 15 10:38:11.858: INFO: Created: latency-svc-lxx9z Sep 15 10:38:11.874: INFO: Got endpoints: latency-svc-lxx9z [1.035376936s] Sep 15 10:38:11.894: INFO: Created: latency-svc-f8xst Sep 15 10:38:11.982: INFO: Got endpoints: latency-svc-f8xst [1.107108151s] Sep 15 10:38:11.982: INFO: Created: latency-svc-lk4t8 Sep 15 10:38:11.988: INFO: Got endpoints: latency-svc-lk4t8 [1.052476397s] Sep 15 10:38:12.038: INFO: Created: latency-svc-jzjnn Sep 15 10:38:12.049: INFO: Got endpoints: latency-svc-jzjnn [1.034380158s] Sep 15 10:38:12.138: INFO: Created: latency-svc-9x5wr Sep 15 10:38:12.148: INFO: Got endpoints: latency-svc-9x5wr [1.103256298s] Sep 15 10:38:12.172: INFO: Created: latency-svc-lqttz Sep 15 10:38:12.214: INFO: Got endpoints: latency-svc-lqttz [1.068871247s] Sep 15 10:38:12.318: INFO: Created: latency-svc-g2wgv Sep 15 10:38:12.358: INFO: Got endpoints: latency-svc-g2wgv [1.16420011s] Sep 15 10:38:12.388: INFO: Created: latency-svc-7rspb Sep 15 10:38:12.503: INFO: Got endpoints: latency-svc-7rspb [1.171522402s] Sep 15 10:38:12.506: INFO: Created: latency-svc-fwttw Sep 15 10:38:12.512: INFO: Got endpoints: latency-svc-fwttw [1.143805611s] Sep 15 10:38:12.572: INFO: Created: latency-svc-pxqhg Sep 15 10:38:12.589: INFO: Got endpoints: latency-svc-pxqhg [1.07075455s] Sep 15 10:38:12.671: 
INFO: Created: latency-svc-zj45j Sep 15 10:38:12.706: INFO: Got endpoints: latency-svc-zj45j [1.139091539s] Sep 15 10:38:12.743: INFO: Created: latency-svc-vpjfx Sep 15 10:38:12.770: INFO: Got endpoints: latency-svc-vpjfx [1.104722794s] Sep 15 10:38:12.847: INFO: Created: latency-svc-b9pk6 Sep 15 10:38:12.853: INFO: Got endpoints: latency-svc-b9pk6 [1.176885911s] Sep 15 10:38:12.885: INFO: Created: latency-svc-dntb4 Sep 15 10:38:12.908: INFO: Got endpoints: latency-svc-dntb4 [1.173094362s] Sep 15 10:38:12.928: INFO: Created: latency-svc-85992 Sep 15 10:38:13.012: INFO: Got endpoints: latency-svc-85992 [1.187154056s] Sep 15 10:38:13.015: INFO: Created: latency-svc-tvzjr Sep 15 10:38:13.022: INFO: Got endpoints: latency-svc-tvzjr [1.148434434s] Sep 15 10:38:13.046: INFO: Created: latency-svc-pf8pf Sep 15 10:38:13.059: INFO: Got endpoints: latency-svc-pf8pf [1.0769655s] Sep 15 10:38:13.088: INFO: Created: latency-svc-9pb8z Sep 15 10:38:13.186: INFO: Got endpoints: latency-svc-9pb8z [1.198187785s] Sep 15 10:38:13.189: INFO: Created: latency-svc-9nnkm Sep 15 10:38:13.215: INFO: Got endpoints: latency-svc-9nnkm [1.166827811s] Sep 15 10:38:13.237: INFO: Created: latency-svc-h87k2 Sep 15 10:38:13.257: INFO: Got endpoints: latency-svc-h87k2 [1.109586419s] Sep 15 10:38:13.286: INFO: Created: latency-svc-csmrz Sep 15 10:38:13.354: INFO: Got endpoints: latency-svc-csmrz [1.140029309s] Sep 15 10:38:13.372: INFO: Created: latency-svc-cnmqt Sep 15 10:38:13.384: INFO: Got endpoints: latency-svc-cnmqt [1.025807797s] Sep 15 10:38:13.409: INFO: Created: latency-svc-ccdsn Sep 15 10:38:13.420: INFO: Got endpoints: latency-svc-ccdsn [916.667449ms] Sep 15 10:38:13.446: INFO: Created: latency-svc-h8mcf Sep 15 10:38:13.545: INFO: Got endpoints: latency-svc-h8mcf [1.033058556s] Sep 15 10:38:13.547: INFO: Created: latency-svc-hpbnd Sep 15 10:38:13.558: INFO: Got endpoints: latency-svc-hpbnd [969.08336ms] Sep 15 10:38:13.589: INFO: Created: latency-svc-5tk7s Sep 15 10:38:13.603: INFO: Got 
endpoints: latency-svc-5tk7s [896.702624ms] Sep 15 10:38:13.624: INFO: Created: latency-svc-t9tfp Sep 15 10:38:13.725: INFO: Got endpoints: latency-svc-t9tfp [954.659674ms] Sep 15 10:38:13.784: INFO: Created: latency-svc-d4cm4 Sep 15 10:38:13.800: INFO: Got endpoints: latency-svc-d4cm4 [946.17905ms] Sep 15 10:38:13.892: INFO: Created: latency-svc-9tgwz Sep 15 10:38:13.897: INFO: Got endpoints: latency-svc-9tgwz [988.271933ms] Sep 15 10:38:13.929: INFO: Created: latency-svc-dzd9b Sep 15 10:38:13.938: INFO: Got endpoints: latency-svc-dzd9b [925.398382ms] Sep 15 10:38:13.960: INFO: Created: latency-svc-xnq5l Sep 15 10:38:13.968: INFO: Got endpoints: latency-svc-xnq5l [945.651731ms] Sep 15 10:38:13.991: INFO: Created: latency-svc-d9xqt Sep 15 10:38:14.048: INFO: Got endpoints: latency-svc-d9xqt [989.849555ms] Sep 15 10:38:14.052: INFO: Created: latency-svc-529qd Sep 15 10:38:14.058: INFO: Got endpoints: latency-svc-529qd [872.318656ms] Sep 15 10:38:14.083: INFO: Created: latency-svc-8qlpx Sep 15 10:38:14.100: INFO: Got endpoints: latency-svc-8qlpx [884.793534ms] Sep 15 10:38:14.121: INFO: Created: latency-svc-xrtb4 Sep 15 10:38:14.137: INFO: Got endpoints: latency-svc-xrtb4 [879.62298ms] Sep 15 10:38:14.228: INFO: Created: latency-svc-pb85s Sep 15 10:38:14.239: INFO: Got endpoints: latency-svc-pb85s [885.179447ms] Sep 15 10:38:14.263: INFO: Created: latency-svc-kjbzd Sep 15 10:38:14.281: INFO: Got endpoints: latency-svc-kjbzd [897.349801ms] Sep 15 10:38:14.311: INFO: Created: latency-svc-k7nmb Sep 15 10:38:14.407: INFO: Got endpoints: latency-svc-k7nmb [987.157832ms] Sep 15 10:38:14.421: INFO: Created: latency-svc-jgkhj Sep 15 10:38:14.439: INFO: Got endpoints: latency-svc-jgkhj [893.593671ms] Sep 15 10:38:14.493: INFO: Created: latency-svc-zc8gp Sep 15 10:38:14.594: INFO: Got endpoints: latency-svc-zc8gp [1.035133971s] Sep 15 10:38:14.596: INFO: Created: latency-svc-qmc55 Sep 15 10:38:14.613: INFO: Got endpoints: latency-svc-qmc55 [1.010283496s] Sep 15 10:38:14.649: 
INFO: Created: latency-svc-cgrlh Sep 15 10:38:14.666: INFO: Got endpoints: latency-svc-cgrlh [940.971849ms] Sep 15 10:38:14.686: INFO: Created: latency-svc-nlwrf Sep 15 10:38:14.746: INFO: Got endpoints: latency-svc-nlwrf [946.475846ms] Sep 15 10:38:14.749: INFO: Created: latency-svc-shcnb Sep 15 10:38:14.757: INFO: Got endpoints: latency-svc-shcnb [860.381148ms] Sep 15 10:38:14.779: INFO: Created: latency-svc-bmmdr Sep 15 10:38:14.799: INFO: Got endpoints: latency-svc-bmmdr [861.426917ms] Sep 15 10:38:14.827: INFO: Created: latency-svc-knk99 Sep 15 10:38:14.916: INFO: Got endpoints: latency-svc-knk99 [948.344076ms] Sep 15 10:38:14.919: INFO: Created: latency-svc-2x8bw Sep 15 10:38:14.926: INFO: Got endpoints: latency-svc-2x8bw [876.989054ms] Sep 15 10:38:14.955: INFO: Created: latency-svc-xzqmk Sep 15 10:38:14.968: INFO: Got endpoints: latency-svc-xzqmk [909.661293ms] Sep 15 10:38:14.991: INFO: Created: latency-svc-q8rnf Sep 15 10:38:15.004: INFO: Got endpoints: latency-svc-q8rnf [903.912935ms] Sep 15 10:38:15.102: INFO: Created: latency-svc-8w78l Sep 15 10:38:15.112: INFO: Got endpoints: latency-svc-8w78l [974.846806ms] Sep 15 10:38:15.133: INFO: Created: latency-svc-7zv4w Sep 15 10:38:15.161: INFO: Got endpoints: latency-svc-7zv4w [921.672335ms] Sep 15 10:38:15.270: INFO: Created: latency-svc-bt62s Sep 15 10:38:15.275: INFO: Got endpoints: latency-svc-bt62s [993.85572ms] Sep 15 10:38:15.301: INFO: Created: latency-svc-fglmw Sep 15 10:38:15.317: INFO: Got endpoints: latency-svc-fglmw [909.738971ms] Sep 15 10:38:15.337: INFO: Created: latency-svc-59bzm Sep 15 10:38:15.367: INFO: Got endpoints: latency-svc-59bzm [928.013395ms] Sep 15 10:38:15.425: INFO: Created: latency-svc-gqzlh Sep 15 10:38:15.429: INFO: Got endpoints: latency-svc-gqzlh [835.714064ms] Sep 15 10:38:15.483: INFO: Created: latency-svc-6vzgw Sep 15 10:38:15.510: INFO: Created: latency-svc-tpl76 Sep 15 10:38:15.511: INFO: Got endpoints: latency-svc-6vzgw [897.484322ms] Sep 15 10:38:15.575: INFO: Got 
endpoints: latency-svc-tpl76 [908.991792ms] Sep 15 10:38:15.595: INFO: Created: latency-svc-jzpqs Sep 15 10:38:15.612: INFO: Got endpoints: latency-svc-jzpqs [865.836824ms] Sep 15 10:38:15.633: INFO: Created: latency-svc-rdqnb Sep 15 10:38:15.649: INFO: Got endpoints: latency-svc-rdqnb [891.998726ms] Sep 15 10:38:15.674: INFO: Created: latency-svc-cq5cw Sep 15 10:38:15.743: INFO: Got endpoints: latency-svc-cq5cw [943.676632ms] Sep 15 10:38:15.756: INFO: Created: latency-svc-qjtrb Sep 15 10:38:15.775: INFO: Got endpoints: latency-svc-qjtrb [858.779161ms] Sep 15 10:38:15.823: INFO: Created: latency-svc-kz5hm Sep 15 10:38:15.841: INFO: Got endpoints: latency-svc-kz5hm [915.533671ms] Sep 15 10:38:15.886: INFO: Created: latency-svc-9jskw Sep 15 10:38:15.895: INFO: Got endpoints: latency-svc-9jskw [927.346671ms] Sep 15 10:38:15.915: INFO: Created: latency-svc-vpz74 Sep 15 10:38:15.944: INFO: Got endpoints: latency-svc-vpz74 [940.099019ms] Sep 15 10:38:15.981: INFO: Created: latency-svc-ng6v5 Sep 15 10:38:16.036: INFO: Got endpoints: latency-svc-ng6v5 [924.168252ms] Sep 15 10:38:16.057: INFO: Created: latency-svc-mpw6q Sep 15 10:38:16.065: INFO: Got endpoints: latency-svc-mpw6q [903.959867ms] Sep 15 10:38:16.089: INFO: Created: latency-svc-zbxjw Sep 15 10:38:16.106: INFO: Got endpoints: latency-svc-zbxjw [830.698659ms] Sep 15 10:38:16.131: INFO: Created: latency-svc-59l8g Sep 15 10:38:16.186: INFO: Got endpoints: latency-svc-59l8g [868.439791ms] Sep 15 10:38:16.212: INFO: Created: latency-svc-pnjms Sep 15 10:38:16.227: INFO: Got endpoints: latency-svc-pnjms [859.948859ms] Sep 15 10:38:16.249: INFO: Created: latency-svc-wh8t9 Sep 15 10:38:16.257: INFO: Got endpoints: latency-svc-wh8t9 [827.525686ms] Sep 15 10:38:16.278: INFO: Created: latency-svc-jxnmr Sep 15 10:38:16.344: INFO: Got endpoints: latency-svc-jxnmr [833.280941ms] Sep 15 10:38:16.376: INFO: Created: latency-svc-tgsxl Sep 15 10:38:16.396: INFO: Got endpoints: latency-svc-tgsxl [821.190125ms] Sep 15 10:38:16.418: 
INFO: Created: latency-svc-vdhkz Sep 15 10:38:16.509: INFO: Got endpoints: latency-svc-vdhkz [896.828407ms] Sep 15 10:38:16.542: INFO: Created: latency-svc-jtg7l Sep 15 10:38:16.547: INFO: Got endpoints: latency-svc-jtg7l [897.343761ms] Sep 15 10:38:16.598: INFO: Created: latency-svc-mthmn Sep 15 10:38:16.689: INFO: Got endpoints: latency-svc-mthmn [946.411886ms] Sep 15 10:38:16.694: INFO: Created: latency-svc-xpd8q Sep 15 10:38:16.702: INFO: Got endpoints: latency-svc-xpd8q [926.980868ms] Sep 15 10:38:16.734: INFO: Created: latency-svc-gsgfk Sep 15 10:38:16.758: INFO: Got endpoints: latency-svc-gsgfk [916.47205ms] Sep 15 10:38:16.782: INFO: Created: latency-svc-pbgbv Sep 15 10:38:16.839: INFO: Got endpoints: latency-svc-pbgbv [943.389809ms] Sep 15 10:38:16.874: INFO: Created: latency-svc-zndd9 Sep 15 10:38:16.908: INFO: Got endpoints: latency-svc-zndd9 [963.574276ms] Sep 15 10:38:16.934: INFO: Created: latency-svc-gdhh9 Sep 15 10:38:17.000: INFO: Got endpoints: latency-svc-gdhh9 [964.06451ms] Sep 15 10:38:17.002: INFO: Created: latency-svc-8twgf Sep 15 10:38:17.046: INFO: Got endpoints: latency-svc-8twgf [981.633161ms] Sep 15 10:38:17.076: INFO: Created: latency-svc-xl9s7 Sep 15 10:38:17.093: INFO: Got endpoints: latency-svc-xl9s7 [987.37322ms] Sep 15 10:38:17.144: INFO: Created: latency-svc-f6ff6 Sep 15 10:38:17.181: INFO: Got endpoints: latency-svc-f6ff6 [995.267706ms] Sep 15 10:38:17.181: INFO: Created: latency-svc-56nn9 Sep 15 10:38:17.202: INFO: Got endpoints: latency-svc-56nn9 [975.07173ms] Sep 15 10:38:17.238: INFO: Created: latency-svc-gmjss Sep 15 10:38:17.294: INFO: Got endpoints: latency-svc-gmjss [1.036600423s] Sep 15 10:38:17.306: INFO: Created: latency-svc-8tgfp Sep 15 10:38:17.318: INFO: Got endpoints: latency-svc-8tgfp [973.750379ms] Sep 15 10:38:17.343: INFO: Created: latency-svc-vgfg7 Sep 15 10:38:17.361: INFO: Got endpoints: latency-svc-vgfg7 [964.347142ms] Sep 15 10:38:17.444: INFO: Created: latency-svc-nkj8c Sep 15 10:38:17.452: INFO: Got 
endpoints: latency-svc-nkj8c [942.448077ms] Sep 15 10:38:17.478: INFO: Created: latency-svc-k7bvr Sep 15 10:38:17.498: INFO: Got endpoints: latency-svc-k7bvr [951.116806ms] Sep 15 10:38:17.521: INFO: Created: latency-svc-zgk2c Sep 15 10:38:17.605: INFO: Got endpoints: latency-svc-zgk2c [915.368629ms] Sep 15 10:38:17.607: INFO: Created: latency-svc-7mnkp Sep 15 10:38:17.630: INFO: Got endpoints: latency-svc-7mnkp [927.777266ms] Sep 15 10:38:17.665: INFO: Created: latency-svc-9b59c Sep 15 10:38:17.678: INFO: Got endpoints: latency-svc-9b59c [920.085666ms] Sep 15 10:38:17.700: INFO: Created: latency-svc-dcwjd Sep 15 10:38:17.754: INFO: Got endpoints: latency-svc-dcwjd [915.57713ms] Sep 15 10:38:17.786: INFO: Created: latency-svc-gf657 Sep 15 10:38:17.792: INFO: Got endpoints: latency-svc-gf657 [884.087662ms] Sep 15 10:38:17.850: INFO: Created: latency-svc-8dvp5 Sep 15 10:38:17.923: INFO: Got endpoints: latency-svc-8dvp5 [922.288229ms] Sep 15 10:38:17.925: INFO: Created: latency-svc-vjws4 Sep 15 10:38:17.931: INFO: Got endpoints: latency-svc-vjws4 [884.433935ms] Sep 15 10:38:17.953: INFO: Created: latency-svc-gbrcs Sep 15 10:38:17.967: INFO: Got endpoints: latency-svc-gbrcs [873.734681ms] Sep 15 10:38:17.991: INFO: Created: latency-svc-x4n7q Sep 15 10:38:18.003: INFO: Got endpoints: latency-svc-x4n7q [822.371561ms] Sep 15 10:38:18.103: INFO: Created: latency-svc-kkv8s Sep 15 10:38:18.117: INFO: Got endpoints: latency-svc-kkv8s [915.276295ms] Sep 15 10:38:18.138: INFO: Created: latency-svc-sdncd Sep 15 10:38:18.168: INFO: Got endpoints: latency-svc-sdncd [874.551953ms] Sep 15 10:38:18.264: INFO: Created: latency-svc-x6vl7 Sep 15 10:38:18.290: INFO: Got endpoints: latency-svc-x6vl7 [972.008866ms] Sep 15 10:38:18.320: INFO: Created: latency-svc-d4xfx Sep 15 10:38:18.330: INFO: Got endpoints: latency-svc-d4xfx [968.933792ms] Sep 15 10:38:18.354: INFO: Created: latency-svc-tf7s2 Sep 15 10:38:18.413: INFO: Got endpoints: latency-svc-tf7s2 [961.623949ms] Sep 15 10:38:18.418: 
INFO: Created: latency-svc-5q6f7 Sep 15 10:38:18.425: INFO: Got endpoints: latency-svc-5q6f7 [927.125163ms] Sep 15 10:38:18.444: INFO: Created: latency-svc-4lr4l Sep 15 10:38:18.455: INFO: Got endpoints: latency-svc-4lr4l [850.030026ms] Sep 15 10:38:18.500: INFO: Created: latency-svc-7zsd6 Sep 15 10:38:18.575: INFO: Got endpoints: latency-svc-7zsd6 [945.325212ms] Sep 15 10:38:18.586: INFO: Created: latency-svc-6gjnm Sep 15 10:38:18.611: INFO: Got endpoints: latency-svc-6gjnm [933.518186ms] Sep 15 10:38:18.648: INFO: Created: latency-svc-fphfp Sep 15 10:38:18.660: INFO: Got endpoints: latency-svc-fphfp [905.627556ms] Sep 15 10:38:18.756: INFO: Created: latency-svc-xhfx2 Sep 15 10:38:18.760: INFO: Got endpoints: latency-svc-xhfx2 [967.373871ms] Sep 15 10:38:18.800: INFO: Created: latency-svc-v78bg Sep 15 10:38:18.830: INFO: Got endpoints: latency-svc-v78bg [907.032861ms] Sep 15 10:38:18.934: INFO: Created: latency-svc-2vnmr Sep 15 10:38:18.959: INFO: Got endpoints: latency-svc-2vnmr [1.028501641s] Sep 15 10:38:19.016: INFO: Created: latency-svc-qk47s Sep 15 10:38:19.033: INFO: Got endpoints: latency-svc-qk47s [1.065351154s] Sep 15 10:38:19.072: INFO: Created: latency-svc-6dq8s Sep 15 10:38:19.098: INFO: Got endpoints: latency-svc-6dq8s [1.094250656s] Sep 15 10:38:19.122: INFO: Created: latency-svc-w2npj Sep 15 10:38:19.135: INFO: Got endpoints: latency-svc-w2npj [1.01806744s] Sep 15 10:38:19.264: INFO: Created: latency-svc-7krkw Sep 15 10:38:19.267: INFO: Got endpoints: latency-svc-7krkw [1.09920013s] Sep 15 10:38:19.340: INFO: Created: latency-svc-kcvt2 Sep 15 10:38:19.438: INFO: Got endpoints: latency-svc-kcvt2 [1.148276391s] Sep 15 10:38:19.442: INFO: Created: latency-svc-2292q Sep 15 10:38:19.447: INFO: Got endpoints: latency-svc-2292q [1.117632462s] Sep 15 10:38:19.490: INFO: Created: latency-svc-7mp46 Sep 15 10:38:19.508: INFO: Got endpoints: latency-svc-7mp46 [1.094497209s] Sep 15 10:38:19.531: INFO: Created: latency-svc-vr4gg Sep 15 10:38:19.599: INFO: Got 
endpoints: latency-svc-vr4gg [1.174220173s] Sep 15 10:38:19.607: INFO: Created: latency-svc-47zq5 Sep 15 10:38:19.616: INFO: Got endpoints: latency-svc-47zq5 [1.161055647s] Sep 15 10:38:19.674: INFO: Created: latency-svc-nnt86 Sep 15 10:38:19.683: INFO: Got endpoints: latency-svc-nnt86 [1.107560191s] Sep 15 10:38:19.767: INFO: Created: latency-svc-b84kz Sep 15 10:38:19.771: INFO: Got endpoints: latency-svc-b84kz [1.159313893s] Sep 15 10:38:19.807: INFO: Created: latency-svc-lklcb Sep 15 10:38:19.827: INFO: Got endpoints: latency-svc-lklcb [1.166797991s] Sep 15 10:38:19.929: INFO: Created: latency-svc-2966z Sep 15 10:38:19.955: INFO: Got endpoints: latency-svc-2966z [1.195577537s] Sep 15 10:38:19.956: INFO: Created: latency-svc-67l57 Sep 15 10:38:19.973: INFO: Got endpoints: latency-svc-67l57 [1.143235068s] Sep 15 10:38:20.006: INFO: Created: latency-svc-gkm2c Sep 15 10:38:20.026: INFO: Got endpoints: latency-svc-gkm2c [1.066180366s] Sep 15 10:38:20.085: INFO: Created: latency-svc-j4clc Sep 15 10:38:20.088: INFO: Got endpoints: latency-svc-j4clc [1.055023242s] Sep 15 10:38:20.126: INFO: Created: latency-svc-q72xs Sep 15 10:38:20.140: INFO: Got endpoints: latency-svc-q72xs [1.041926494s] Sep 15 10:38:20.159: INFO: Created: latency-svc-trsj2 Sep 15 10:38:20.263: INFO: Got endpoints: latency-svc-trsj2 [1.127913509s] Sep 15 10:38:20.272: INFO: Created: latency-svc-rz4d5 Sep 15 10:38:20.290: INFO: Got endpoints: latency-svc-rz4d5 [1.022666526s] Sep 15 10:38:20.311: INFO: Created: latency-svc-zw5qs Sep 15 10:38:20.320: INFO: Got endpoints: latency-svc-zw5qs [881.869478ms] Sep 15 10:38:20.342: INFO: Created: latency-svc-9n6vv Sep 15 10:38:20.350: INFO: Got endpoints: latency-svc-9n6vv [902.961763ms] Sep 15 10:38:20.450: INFO: Created: latency-svc-rjxg6 Sep 15 10:38:20.455: INFO: Got endpoints: latency-svc-rjxg6 [947.130255ms] Sep 15 10:38:20.495: INFO: Created: latency-svc-9wp5x Sep 15 10:38:20.515: INFO: Got endpoints: latency-svc-9wp5x [915.329642ms] Sep 15 10:38:20.539: 
INFO: Created: latency-svc-blwlg Sep 15 10:38:20.611: INFO: Got endpoints: latency-svc-blwlg [995.166108ms] Sep 15 10:38:20.613: INFO: Created: latency-svc-wff22 Sep 15 10:38:20.621: INFO: Got endpoints: latency-svc-wff22 [938.343439ms] Sep 15 10:38:20.647: INFO: Created: latency-svc-jr2p5 Sep 15 10:38:20.681: INFO: Got endpoints: latency-svc-jr2p5 [909.880205ms] Sep 15 10:38:20.798: INFO: Created: latency-svc-q7l84 Sep 15 10:38:20.802: INFO: Got endpoints: latency-svc-q7l84 [974.617909ms] Sep 15 10:38:20.834: INFO: Created: latency-svc-nchlt Sep 15 10:38:20.844: INFO: Got endpoints: latency-svc-nchlt [888.582781ms] Sep 15 10:38:20.863: INFO: Created: latency-svc-kkpz4 Sep 15 10:38:20.887: INFO: Got endpoints: latency-svc-kkpz4 [913.968755ms] Sep 15 10:38:20.958: INFO: Created: latency-svc-mppnk Sep 15 10:38:20.990: INFO: Created: latency-svc-qhn4r Sep 15 10:38:20.991: INFO: Got endpoints: latency-svc-mppnk [965.292213ms] Sep 15 10:38:21.017: INFO: Got endpoints: latency-svc-qhn4r [929.290425ms] Sep 15 10:38:21.043: INFO: Created: latency-svc-kdnjm Sep 15 10:38:21.102: INFO: Got endpoints: latency-svc-kdnjm [962.288435ms] Sep 15 10:38:21.125: INFO: Created: latency-svc-77p2w Sep 15 10:38:21.139: INFO: Got endpoints: latency-svc-77p2w [875.978216ms] Sep 15 10:38:21.173: INFO: Created: latency-svc-tkr7s Sep 15 10:38:21.200: INFO: Got endpoints: latency-svc-tkr7s [909.702201ms] Sep 15 10:38:21.270: INFO: Created: latency-svc-cm4tp Sep 15 10:38:21.289: INFO: Got endpoints: latency-svc-cm4tp [968.913869ms] Sep 15 10:38:21.341: INFO: Created: latency-svc-d4hb9 Sep 15 10:38:21.350: INFO: Got endpoints: latency-svc-d4hb9 [999.180498ms] Sep 15 10:38:21.420: INFO: Created: latency-svc-tk6qr Sep 15 10:38:21.425: INFO: Got endpoints: latency-svc-tk6qr [969.906154ms] Sep 15 10:38:21.457: INFO: Created: latency-svc-4j7kf Sep 15 10:38:21.487: INFO: Got endpoints: latency-svc-4j7kf [972.61431ms] Sep 15 10:38:21.517: INFO: Created: latency-svc-6fp45 Sep 15 10:38:21.575: INFO: Got 
endpoints: latency-svc-6fp45 [964.012495ms] Sep 15 10:38:21.575: INFO: Latencies: [83.349282ms 137.135504ms 173.212257ms 222.950195ms 287.492924ms 329.942579ms 437.780889ms 492.308267ms 585.365355ms 594.167242ms 672.474447ms 786.399265ms 821.190125ms 822.371561ms 827.525686ms 830.698659ms 833.280941ms 835.714064ms 850.030026ms 858.779161ms 859.948859ms 860.381148ms 861.426917ms 864.719027ms 865.836824ms 868.439791ms 869.987234ms 872.318656ms 873.734681ms 874.551953ms 875.978216ms 876.989054ms 879.62298ms 881.869478ms 884.087662ms 884.433935ms 884.793534ms 885.179447ms 888.582781ms 891.998726ms 893.593671ms 896.702624ms 896.828407ms 897.343761ms 897.349801ms 897.484322ms 902.961763ms 903.912935ms 903.959867ms 905.627556ms 907.032861ms 908.991792ms 909.661293ms 909.702201ms 909.738971ms 909.880205ms 913.968755ms 915.276295ms 915.329642ms 915.368629ms 915.533671ms 915.57713ms 916.47205ms 916.667449ms 920.085666ms 921.672335ms 921.963357ms 922.288229ms 924.168252ms 925.398382ms 926.980868ms 927.125163ms 927.346671ms 927.777266ms 928.013395ms 929.290425ms 933.518186ms 934.832697ms 938.343439ms 940.099019ms 940.971849ms 941.626073ms 942.448077ms 943.389809ms 943.676632ms 945.325212ms 945.651731ms 946.17905ms 946.411886ms 946.475846ms 947.130255ms 948.344076ms 951.116806ms 951.763079ms 954.659674ms 961.623949ms 962.288435ms 963.574276ms 964.012495ms 964.06451ms 964.347142ms 965.292213ms 967.373871ms 968.038828ms 968.913869ms 968.933792ms 969.064792ms 969.08336ms 969.331367ms 969.543346ms 969.906154ms 970.953076ms 972.008866ms 972.61431ms 973.750379ms 974.617909ms 974.846806ms 975.07173ms 978.109972ms 981.633161ms 987.074515ms 987.157832ms 987.37322ms 987.963165ms 988.271933ms 989.849555ms 992.288257ms 993.85572ms 995.166108ms 995.267706ms 997.653297ms 999.180498ms 1.010283496s 1.014985677s 1.017529535s 1.01806744s 1.022666526s 1.024587466s 1.025807797s 1.028048765s 1.028065765s 1.028501641s 1.033058556s 1.034225894s 1.034380158s 1.035133971s 1.035376936s 1.036600423s 
1.037966946s 1.039958238s 1.040816713s 1.041926494s 1.044556333s 1.051911751s 1.052476397s 1.053317305s 1.053400774s 1.054207428s 1.055023242s 1.056997487s 1.065170739s 1.065351154s 1.066180366s 1.068871247s 1.07075455s 1.0769655s 1.086673792s 1.094250656s 1.094497209s 1.099048818s 1.09920013s 1.103256298s 1.103352861s 1.104722794s 1.107108151s 1.107560191s 1.109586419s 1.112117045s 1.117632462s 1.127913509s 1.128940456s 1.139091539s 1.140029309s 1.143235068s 1.143805611s 1.148276391s 1.148434434s 1.159033875s 1.159313893s 1.161055647s 1.16420011s 1.166797991s 1.166827811s 1.171522402s 1.173094362s 1.174220173s 1.176885911s 1.187154056s 1.195577537s 1.198187785s] Sep 15 10:38:21.576: INFO: 50 %ile: 964.347142ms Sep 15 10:38:21.576: INFO: 90 %ile: 1.128940456s Sep 15 10:38:21.576: INFO: 99 %ile: 1.195577537s Sep 15 10:38:21.576: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 10:38:21.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-4788" for this suite. 
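The percentile summary above (50/90/99 %ile over 200 samples) can be reproduced with a short sketch. This assumes a nearest-rank rule (`index = n * p / 100` on the sorted samples); the exact rule used by the e2e framework may differ slightly, and the sample values here are illustrative, not taken from the run:

```python
def percentile(sorted_samples, p):
    """Nearest-rank percentile over an already-sorted list of latencies.

    Assumed rule: index = n * p // 100, clamped to the last element.
    """
    idx = min(len(sorted_samples) * p // 100, len(sorted_samples) - 1)
    return sorted_samples[idx]

# Illustrative usage with synthetic latencies (seconds), not the logged data:
samples = sorted(0.8 + 0.002 * i for i in range(200))
p50 = percentile(samples, 50)
p90 = percentile(samples, 90)
p99 = percentile(samples, 99)
```

With 200 samples this picks index 100 for the 50th percentile and index 198 for the 99th, which is consistent with the 99 %ile in the log being the second-to-last value of the sorted latency list.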
• [SLOW TEST:17.463 seconds] [sig-network] Service endpoints latency /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":303,"completed":44,"skipped":834,"failed":0} [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 10:38:21.585: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Kubectl label /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1333 STEP: creating the pod Sep 15 10:38:21.624: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-872' Sep 15 10:38:21.998: INFO: stderr: "" Sep 15 10:38:21.998: INFO: stdout: "pod/pause created\n" Sep 15 10:38:21.998: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Sep 15 10:38:21.998: INFO: 
Waiting up to 5m0s for pod "pause" in namespace "kubectl-872" to be "running and ready" Sep 15 10:38:22.027: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 29.089156ms Sep 15 10:38:24.031: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033136937s Sep 15 10:38:26.035: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.036972955s Sep 15 10:38:26.035: INFO: Pod "pause" satisfied condition "running and ready" Sep 15 10:38:26.035: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: adding the label testing-label with value testing-label-value to a pod Sep 15 10:38:26.035: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-872' Sep 15 10:38:26.136: INFO: stderr: "" Sep 15 10:38:26.136: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Sep 15 10:38:26.136: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-872' Sep 15 10:38:26.236: INFO: stderr: "" Sep 15 10:38:26.236: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s testing-label-value\n" STEP: removing the label testing-label of a pod Sep 15 10:38:26.237: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-872' Sep 15 10:38:26.336: INFO: stderr: "" Sep 15 10:38:26.336: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Sep 15 
10:38:26.336: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-872' Sep 15 10:38:26.431: INFO: stderr: "" Sep 15 10:38:26.431: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n" [AfterEach] Kubectl label /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1340 STEP: using delete to clean up resources Sep 15 10:38:26.431: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-872' Sep 15 10:38:26.577: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Sep 15 10:38:26.577: INFO: stdout: "pod \"pause\" force deleted\n" Sep 15 10:38:26.577: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-872' Sep 15 10:38:26.738: INFO: stderr: "No resources found in kubectl-872 namespace.\n" Sep 15 10:38:26.738: INFO: stdout: "" Sep 15 10:38:26.738: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-872 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Sep 15 10:38:26.933: INFO: stderr: "" Sep 15 10:38:26.933: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 10:38:26.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-872" for 
this suite. • [SLOW TEST:5.508 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1330 should update the label on a resource [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":303,"completed":45,"skipped":834,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 10:38:27.094: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: 
Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Sep 15 10:38:28.108: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Sep 15 10:38:30.558: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735763108, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735763108, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735763108, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735763108, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-85d57b96d6\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 15 10:38:32.587: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735763108, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735763108, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735763108, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63735763108, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-85d57b96d6\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 15 10:38:35.694: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 15 10:38:35.737: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 10:38:37.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-6252" for this suite. 
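The conversion test above drives a webhook that receives a `ConversionReview` and must return the same objects re-serialized at the desired API version. A minimal sketch of such a handler follows; field names match the `apiextensions.k8s.io/v1` `ConversionReview` schema, but the trivial "just rewrite apiVersion" conversion is an assumption — a real webhook would also transform any fields that differ between versions:

```python
def convert(review):
    """Sketch of a CRD conversion webhook handler.

    Takes a ConversionReview dict and returns the response ConversionReview.
    Here conversion is assumed to be schema-identical, so only apiVersion
    is rewritten on each object.
    """
    request = review["request"]
    desired = request["desiredAPIVersion"]
    converted = []
    for obj in request["objects"]:
        out = dict(obj)            # shallow copy; fields are assumed unchanged
        out["apiVersion"] = desired
        converted.append(out)
    return {
        "apiVersion": review["apiVersion"],
        "kind": "ConversionReview",
        "response": {
            "uid": request["uid"],          # must echo the request uid
            "result": {"status": "Success"},
            "convertedObjects": converted,
        },
    }
```

The response `uid` must echo the request `uid`, and `result.status` must be `"Success"` for the API server to accept the converted objects.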
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:10.342 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":303,"completed":46,"skipped":882,"failed":0} S ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 10:38:37.436: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Update Demo 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:308 [It] should create and stop a replication controller [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a replication controller Sep 15 10:38:37.846: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2364' Sep 15 10:38:38.547: INFO: stderr: "" Sep 15 10:38:38.547: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Sep 15 10:38:38.547: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2364' Sep 15 10:38:38.743: INFO: stderr: "" Sep 15 10:38:38.743: INFO: stdout: "update-demo-nautilus-jnrfd update-demo-nautilus-qf7m5 " Sep 15 10:38:38.743: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jnrfd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2364' Sep 15 10:38:38.873: INFO: stderr: "" Sep 15 10:38:38.873: INFO: stdout: "" Sep 15 10:38:38.873: INFO: update-demo-nautilus-jnrfd is created but not running Sep 15 10:38:43.873: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2364' Sep 15 10:38:44.062: INFO: stderr: "" Sep 15 10:38:44.062: INFO: stdout: "update-demo-nautilus-jnrfd update-demo-nautilus-qf7m5 " Sep 15 10:38:44.062: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jnrfd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2364' Sep 15 10:38:44.241: INFO: stderr: "" Sep 15 10:38:44.241: INFO: stdout: "" Sep 15 10:38:44.241: INFO: update-demo-nautilus-jnrfd is created but not running Sep 15 10:38:49.241: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2364' Sep 15 10:38:49.355: INFO: stderr: "" Sep 15 10:38:49.355: INFO: stdout: "update-demo-nautilus-jnrfd update-demo-nautilus-qf7m5 " Sep 15 10:38:49.356: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jnrfd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2364' Sep 15 10:38:49.485: INFO: stderr: "" Sep 15 10:38:49.485: INFO: stdout: "true" Sep 15 10:38:49.485: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jnrfd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2364' Sep 15 10:38:49.600: INFO: stderr: "" Sep 15 10:38:49.600: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Sep 15 10:38:49.600: INFO: validating pod update-demo-nautilus-jnrfd Sep 15 10:38:49.607: INFO: got data: { "image": "nautilus.jpg" } Sep 15 10:38:49.607: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Sep 15 10:38:49.607: INFO: update-demo-nautilus-jnrfd is verified up and running Sep 15 10:38:49.607: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qf7m5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2364' Sep 15 10:38:49.750: INFO: stderr: "" Sep 15 10:38:49.750: INFO: stdout: "true" Sep 15 10:38:49.750: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qf7m5 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2364' Sep 15 10:38:49.893: INFO: stderr: "" Sep 15 10:38:49.893: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Sep 15 10:38:49.893: INFO: validating pod update-demo-nautilus-qf7m5 Sep 15 10:38:49.901: INFO: got data: { "image": "nautilus.jpg" } Sep 15 10:38:49.901: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Sep 15 10:38:49.901: INFO: update-demo-nautilus-qf7m5 is verified up and running STEP: using delete to clean up resources Sep 15 10:38:49.901: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2364' Sep 15 10:38:50.061: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Sep 15 10:38:50.061: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Sep 15 10:38:50.061: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-2364' Sep 15 10:38:50.234: INFO: stderr: "No resources found in kubectl-2364 namespace.\n" Sep 15 10:38:50.234: INFO: stdout: "" Sep 15 10:38:50.234: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-2364 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Sep 15 10:38:50.369: INFO: stderr: "" Sep 15 10:38:50.370: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 
15 10:38:50.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2364" for this suite. • [SLOW TEST:12.985 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:306 should create and stop a replication controller [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":303,"completed":47,"skipped":883,"failed":0} SSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 10:38:50.421: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement 
metadata [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 10:38:51.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-2697" for this suite. •{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":303,"completed":48,"skipped":887,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 10:38:51.381: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: set up a multi version CRD Sep 15 10:38:52.942: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old 
version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 10:39:11.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2056" for this suite. • [SLOW TEST:19.677 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":303,"completed":49,"skipped":917,"failed":0} [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 10:39:11.058: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 
[It] should contain environment variables for services [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 15 10:39:15.265: INFO: Waiting up to 5m0s for pod "client-envvars-210a08c1-f545-4edd-bf9e-41d14f714c21" in namespace "pods-6976" to be "Succeeded or Failed" Sep 15 10:39:15.267: INFO: Pod "client-envvars-210a08c1-f545-4edd-bf9e-41d14f714c21": Phase="Pending", Reason="", readiness=false. Elapsed: 2.337502ms Sep 15 10:39:17.271: INFO: Pod "client-envvars-210a08c1-f545-4edd-bf9e-41d14f714c21": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00648007s Sep 15 10:39:19.276: INFO: Pod "client-envvars-210a08c1-f545-4edd-bf9e-41d14f714c21": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011037512s STEP: Saw pod success Sep 15 10:39:19.276: INFO: Pod "client-envvars-210a08c1-f545-4edd-bf9e-41d14f714c21" satisfied condition "Succeeded or Failed" Sep 15 10:39:19.278: INFO: Trying to get logs from node kali-worker pod client-envvars-210a08c1-f545-4edd-bf9e-41d14f714c21 container env3cont: STEP: delete the pod Sep 15 10:39:19.343: INFO: Waiting for pod client-envvars-210a08c1-f545-4edd-bf9e-41d14f714c21 to disappear Sep 15 10:39:19.352: INFO: Pod client-envvars-210a08c1-f545-4edd-bf9e-41d14f714c21 no longer exists [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 10:39:19.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6976" for this suite. 
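The pods test above checks the docker-link-style environment variables that the kubelet injects for each service visible to a pod. The naming rule can be sketched as below (this shows only the `_SERVICE_HOST`/`_SERVICE_PORT` pair; the kubelet also injects additional `{NAME}_PORT_*` link variables not reproduced here):

```python
def service_env(name, cluster_ip, port):
    """Sketch of the service env vars injected into pods.

    The service name is uppercased and '-' is mapped to '_' to form
    the variable prefix, e.g. 'my-svc' -> 'MY_SVC_SERVICE_HOST'.
    """
    key = name.upper().replace("-", "_")
    return {
        f"{key}_SERVICE_HOST": cluster_ip,
        f"{key}_SERVICE_PORT": str(port),
    }

# e.g. service_env("fooservice-1", "10.0.0.1", 8765)
#      yields FOOSERVICE_1_SERVICE_HOST / FOOSERVICE_1_SERVICE_PORT
```

Note these variables are only set for services that already exist when the pod starts, which is why the test creates the service before the client pod.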
• [SLOW TEST:8.304 seconds] [k8s.io] Pods /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should contain environment variables for services [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":303,"completed":50,"skipped":917,"failed":0} SSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 10:39:19.362: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should be updated [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Sep 15 10:39:23.940: INFO: Successfully updated pod "pod-update-c394fe13-62dd-4b0a-a4fb-833e8c2c39b1" STEP: verifying the updated pod is in kubernetes Sep 15 10:39:23.964: INFO: Pod update OK [AfterEach] [k8s.io] Pods 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 10:39:23.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7394" for this suite. •{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":303,"completed":51,"skipped":920,"failed":0} SSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 10:39:23.990: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:78 [It] deployment should support proportional scaling [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 15 10:39:24.043: INFO: Creating deployment "webserver-deployment" Sep 15 10:39:24.053: INFO: Waiting for observed generation 1 Sep 15 10:39:26.183: INFO: Waiting for all required pods to come up Sep 15 10:39:26.189: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running Sep 15 10:39:50.197: INFO: Waiting for deployment "webserver-deployment" to complete Sep 15 10:39:50.317: INFO: Updating deployment "webserver-deployment" with a non-existent image Sep 15 
10:39:50.326: INFO: Updating deployment webserver-deployment Sep 15 10:39:50.326: INFO: Waiting for observed generation 2 Sep 15 10:39:52.906: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Sep 15 10:39:53.376: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Sep 15 10:39:53.379: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Sep 15 10:39:53.679: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Sep 15 10:39:53.679: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Sep 15 10:39:53.747: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Sep 15 10:39:53.941: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Sep 15 10:39:53.942: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Sep 15 10:39:53.999: INFO: Updating deployment webserver-deployment Sep 15 10:39:53.999: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Sep 15 10:39:54.301: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Sep 15 10:39:57.421: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 Sep 15 10:39:57.626: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-9682 /apis/apps/v1/namespaces/deployment-9682/deployments/webserver-deployment d51d2852-b4cd-4c8b-82e0-14528e4d2d5f 433977 3 2020-09-15 10:39:24 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-09-15 10:39:53 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-09-15 10:39:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0037261c8 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-09-15 10:39:54 +0000 UTC,LastTransitionTime:2020-09-15 10:39:54 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-795d758f88" is progressing.,LastUpdateTime:2020-09-15 10:39:54 +0000 UTC,LastTransitionTime:2020-09-15 10:39:24 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Sep 15 10:39:58.147: INFO: New ReplicaSet "webserver-deployment-795d758f88" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-795d758f88 deployment-9682 /apis/apps/v1/namespaces/deployment-9682/replicasets/webserver-deployment-795d758f88 6d527a3f-f577-44a2-b791-bced20bdd4e8 433971 3 2020-09-15 10:39:50 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment d51d2852-b4cd-4c8b-82e0-14528e4d2d5f 0xc0036a4f17 0xc0036a4f18}] [] [{kube-controller-manager Update apps/v1 2020-09-15 10:39:54 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d51d2852-b4cd-4c8b-82e0-14528e4d2d5f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 795d758f88,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0036a4fa8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Sep 15 10:39:58.148: INFO: All old ReplicaSets of Deployment "webserver-deployment": Sep 15 10:39:58.148: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-dd94f59b7 deployment-9682 /apis/apps/v1/namespaces/deployment-9682/replicasets/webserver-deployment-dd94f59b7 80943d0a-c654-41f2-808a-b13f218d0a5a 433972 3 2020-09-15 10:39:24 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment d51d2852-b4cd-4c8b-82e0-14528e4d2d5f 0xc0036a5007 0xc0036a5008}] [] [{kube-controller-manager Update apps/v1 2020-09-15 10:39:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d51d2852-b4cd-4c8b-82e0-14528e4d2d5f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*20,Select
or:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: dd94f59b7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0036a5078 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Sep 15 10:39:58.469: INFO: Pod "webserver-deployment-795d758f88-54j2k" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-54j2k webserver-deployment-795d758f88- deployment-9682 /api/v1/namespaces/deployment-9682/pods/webserver-deployment-795d758f88-54j2k 954ce6e2-32fa-4eb6-b1bd-dec06e89b95f 433876 0 2020-09-15 10:39:50 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 6d527a3f-f577-44a2-b791-bced20bdd4e8 0xc0036e4537 0xc0036e4538}] [] [{kube-controller-manager Update v1 2020-09-15 10:39:50 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6d527a3f-f577-44a2-b791-bced20bdd4e8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-15 10:39:51 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qrvmq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qrvmq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{
},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qrvmq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:
[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2020-09-15 10:39:51 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 15 10:39:58.469: INFO: Pod "webserver-deployment-795d758f88-7lpdt" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-7lpdt webserver-deployment-795d758f88- deployment-9682 /api/v1/namespaces/deployment-9682/pods/webserver-deployment-795d758f88-7lpdt b83c53b1-c753-438c-8a45-d8e2bbeffe64 434004 0 2020-09-15 10:39:54 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 6d527a3f-f577-44a2-b791-bced20bdd4e8 0xc0036e4760 0xc0036e4761}] [] [{kube-controller-manager Update v1 2020-09-15 10:39:54 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6d527a3f-f577-44a2-b791-bced20bdd4e8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-15 10:39:55 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qrvmq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qrvmq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{
},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qrvmq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:
[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2020-09-15 10:39:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 15 10:39:58.469: INFO: Pod "webserver-deployment-795d758f88-c76kk" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-c76kk webserver-deployment-795d758f88- deployment-9682 /api/v1/namespaces/deployment-9682/pods/webserver-deployment-795d758f88-c76kk 5b014cf6-8a99-417a-be77-95ea8db0c86f 433854 0 2020-09-15 10:39:50 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 6d527a3f-f577-44a2-b791-bced20bdd4e8 0xc0036e4970 0xc0036e4971}] [] [{kube-controller-manager Update v1 2020-09-15 10:39:50 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6d527a3f-f577-44a2-b791-bced20bdd4e8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-15 10:39:50 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qrvmq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qrvmq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{
},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qrvmq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:
[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2020-09-15 10:39:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 15 10:39:58.469: INFO: Pod "webserver-deployment-795d758f88-c79v8" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-c79v8 webserver-deployment-795d758f88- deployment-9682 /api/v1/namespaces/deployment-9682/pods/webserver-deployment-795d758f88-c79v8 bec0f441-7c57-4243-bde8-9063bab54ff6 434037 0 2020-09-15 10:39:54 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 6d527a3f-f577-44a2-b791-bced20bdd4e8 0xc0036e4bd0 0xc0036e4bd1}] [] [{kube-controller-manager Update v1 2020-09-15 10:39:54 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6d527a3f-f577-44a2-b791-bced20bdd4e8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-15 10:39:57 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qrvmq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qrvmq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{
},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qrvmq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:
[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2020-09-15 10:39:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 15 10:39:58.470: INFO: Pod "webserver-deployment-795d758f88-hkwmf" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-hkwmf webserver-deployment-795d758f88- deployment-9682 /api/v1/namespaces/deployment-9682/pods/webserver-deployment-795d758f88-hkwmf 72493986-b73f-492f-9191-44af649875a6 433867 0 2020-09-15 10:39:50 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 6d527a3f-f577-44a2-b791-bced20bdd4e8 0xc0036e4e20 0xc0036e4e21}] [] [{kube-controller-manager Update v1 2020-09-15 10:39:50 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6d527a3f-f577-44a2-b791-bced20bdd4e8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-15 10:39:51 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qrvmq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qrvmq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{
},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qrvmq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[
]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:,StartTime:2020-09-15 10:39:51 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 15 10:39:58.470: INFO: Pod "webserver-deployment-795d758f88-jm25h" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-jm25h webserver-deployment-795d758f88- deployment-9682 /api/v1/namespaces/deployment-9682/pods/webserver-deployment-795d758f88-jm25h 8f30c6d1-5967-4d59-8d2d-da1631bfea5c 433992 0 2020-09-15 10:39:54 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 6d527a3f-f577-44a2-b791-bced20bdd4e8 0xc0036e5040 0xc0036e5041}] [] [{kube-controller-manager Update v1 2020-09-15 10:39:54 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6d527a3f-f577-44a2-b791-bced20bdd4e8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-15 10:39:54 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qrvmq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qrvmq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{
},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qrvmq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[
]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:,StartTime:2020-09-15 10:39:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 15 10:39:58.470: INFO: Pod "webserver-deployment-795d758f88-m7rm5" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-m7rm5 webserver-deployment-795d758f88- deployment-9682 /api/v1/namespaces/deployment-9682/pods/webserver-deployment-795d758f88-m7rm5 fc7207c4-97bd-44c6-ae88-7358cae28766 433967 0 2020-09-15 10:39:54 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 6d527a3f-f577-44a2-b791-bced20bdd4e8 0xc0036e5210 0xc0036e5211}] [] [{kube-controller-manager Update v1 2020-09-15 10:39:54 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6d527a3f-f577-44a2-b791-bced20bdd4e8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qrvmq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qrvmq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qrvmq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile
:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 15 10:39:58.470: INFO: Pod "webserver-deployment-795d758f88-mgjhv" is not available: 
&Pod{ObjectMeta:{webserver-deployment-795d758f88-mgjhv webserver-deployment-795d758f88- deployment-9682 /api/v1/namespaces/deployment-9682/pods/webserver-deployment-795d758f88-mgjhv 34fe1e87-36e0-4891-9b3c-04ab17782d84 433980 0 2020-09-15 10:39:54 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 6d527a3f-f577-44a2-b791-bced20bdd4e8 0xc0036e53d0 0xc0036e53d1}] [] [{kube-controller-manager Update v1 2020-09-15 10:39:54 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6d527a3f-f577-44a2-b791-bced20bdd4e8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-15 10:39:54 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qrvmq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qrvmq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qrvmq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},Resta
rtPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-09-15 10:39:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2020-09-15 10:39:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 15 10:39:58.471: INFO: Pod "webserver-deployment-795d758f88-sfshn" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-sfshn webserver-deployment-795d758f88- deployment-9682 /api/v1/namespaces/deployment-9682/pods/webserver-deployment-795d758f88-sfshn 5c2077fb-a8a6-49e6-a6d5-57ca584ca331 433893 0 2020-09-15 10:39:52 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 6d527a3f-f577-44a2-b791-bced20bdd4e8 0xc0036e5590 0xc0036e5591}] [] [{kube-controller-manager Update v1 2020-09-15 10:39:52 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6d527a3f-f577-44a2-b791-bced20bdd4e8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-15 10:39:52 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qrvmq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qrvmq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qrvmq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},Resta
rtPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:52 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:52 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-09-15 10:39:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2020-09-15 10:39:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 15 10:39:58.471: INFO: Pod "webserver-deployment-795d758f88-tt75h" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-tt75h webserver-deployment-795d758f88- deployment-9682 /api/v1/namespaces/deployment-9682/pods/webserver-deployment-795d758f88-tt75h d2fb89f8-b9c9-4f7a-94bc-87a14544054e 433976 0 2020-09-15 10:39:54 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 6d527a3f-f577-44a2-b791-bced20bdd4e8 0xc0036e5820 0xc0036e5821}] [] [{kube-controller-manager Update v1 2020-09-15 10:39:54 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6d527a3f-f577-44a2-b791-bced20bdd4e8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-15 10:39:54 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qrvmq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qrvmq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qrvmq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},Resta
rtPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-09-15 10:39:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:,StartTime:2020-09-15 10:39:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 15 10:39:58.471: INFO: Pod "webserver-deployment-795d758f88-wfqnm" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-wfqnm webserver-deployment-795d758f88- deployment-9682 /api/v1/namespaces/deployment-9682/pods/webserver-deployment-795d758f88-wfqnm 80bb9230-375c-492f-b07c-820d5cb4bec8 434008 0 2020-09-15 10:39:54 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 6d527a3f-f577-44a2-b791-bced20bdd4e8 0xc0036e5a40 0xc0036e5a41}] [] [{kube-controller-manager Update v1 2020-09-15 10:39:54 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6d527a3f-f577-44a2-b791-bced20bdd4e8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-15 10:39:55 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qrvmq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qrvmq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qrvmq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},Resta
rtPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-09-15 10:39:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:,StartTime:2020-09-15 10:39:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 15 10:39:58.471: INFO: Pod "webserver-deployment-795d758f88-wnrrz" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-wnrrz webserver-deployment-795d758f88- deployment-9682 /api/v1/namespaces/deployment-9682/pods/webserver-deployment-795d758f88-wnrrz 117c751a-d425-4877-8169-481a0b875cf4 433944 0 2020-09-15 10:39:54 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 6d527a3f-f577-44a2-b791-bced20bdd4e8 0xc0036e5c00 0xc0036e5c01}] [] [{kube-controller-manager Update v1 2020-09-15 10:39:54 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6d527a3f-f577-44a2-b791-bced20bdd4e8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-15 10:39:54 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qrvmq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qrvmq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qrvmq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},Resta
rtPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-09-15 10:39:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:,StartTime:2020-09-15 10:39:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 15 10:39:58.471: INFO: Pod "webserver-deployment-795d758f88-zd8mp" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-zd8mp webserver-deployment-795d758f88- deployment-9682 /api/v1/namespaces/deployment-9682/pods/webserver-deployment-795d758f88-zd8mp 221e29b4-0ba1-496e-b321-7daed188f3a6 433891 0 2020-09-15 10:39:51 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 6d527a3f-f577-44a2-b791-bced20bdd4e8 0xc0036e5e10 0xc0036e5e11}] [] [{kube-controller-manager Update v1 2020-09-15 10:39:51 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6d527a3f-f577-44a2-b791-bced20bdd4e8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-15 10:39:52 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qrvmq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qrvmq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qrvmq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},Resta
rtPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:52 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:52 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-09-15 10:39:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:,StartTime:2020-09-15 10:39:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 15 10:39:58.471: INFO: Pod "webserver-deployment-dd94f59b7-2fd8v" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-2fd8v webserver-deployment-dd94f59b7- deployment-9682 /api/v1/namespaces/deployment-9682/pods/webserver-deployment-dd94f59b7-2fd8v a7bd48f8-ea0a-4035-9eac-f25bd92a5f08 433690 0 2020-09-15 10:39:24 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 80943d0a-c654-41f2-808a-b13f218d0a5a 0xc003760010 0xc003760011}] [] [{kube-controller-manager Update v1 2020-09-15 10:39:24 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"80943d0a-c654-41f2-808a-b13f218d0a5a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-15 10:39:33 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.37\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qrvmq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qrvmq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qrvmq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fi
le,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:24 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:10.244.2.37,StartTime:2020-09-15 10:39:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-15 10:39:31 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://d5cc47542c1bf2da0ee1775917efd561f3b617166da479a2e51c333b3eca71ee,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.37,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 15 10:39:58.471: INFO: Pod "webserver-deployment-dd94f59b7-49xj2" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-49xj2 webserver-deployment-dd94f59b7- deployment-9682 /api/v1/namespaces/deployment-9682/pods/webserver-deployment-dd94f59b7-49xj2 6126ca2e-e744-49ce-a19f-2811d560ea11 433765 0 2020-09-15 10:39:24 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 80943d0a-c654-41f2-808a-b13f218d0a5a 0xc003760217 0xc003760218}] [] [{kube-controller-manager Update v1 2020-09-15 10:39:24 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"80943d0a-c654-41f2-808a-b13f218d0a5a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-15 10:39:43 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.40\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qrvmq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qrvmq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:Resou
rceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qrvmq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHost
nameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:10.244.1.40,StartTime:2020-09-15 10:39:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-15 10:39:42 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://ed4c1cf5ae248c4547e3e4128699453654d2fd6246a083f860024dfc25a54b7c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.40,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 15 10:39:58.472: INFO: Pod "webserver-deployment-dd94f59b7-8jmkz" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-8jmkz webserver-deployment-dd94f59b7- deployment-9682 /api/v1/namespaces/deployment-9682/pods/webserver-deployment-dd94f59b7-8jmkz 8e92c3ba-a36a-4000-936b-0f4b1414c61b 433985 0 2020-09-15 10:39:54 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 80943d0a-c654-41f2-808a-b13f218d0a5a 0xc003760447 
0xc003760448}] [] [{kube-controller-manager Update v1 2020-09-15 10:39:54 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"80943d0a-c654-41f2-808a-b13f218d0a5a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-15 10:39:54 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qrvmq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qrvmq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDi
r:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qrvmq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstr
aint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:,StartTime:2020-09-15 10:39:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 15 10:39:58.472: INFO: Pod "webserver-deployment-dd94f59b7-95hpn" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-95hpn webserver-deployment-dd94f59b7- deployment-9682 /api/v1/namespaces/deployment-9682/pods/webserver-deployment-dd94f59b7-95hpn 5f2a61f1-fbd1-4af3-a0c3-a0d425b5f79f 433987 0 2020-09-15 10:39:54 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 80943d0a-c654-41f2-808a-b13f218d0a5a 0xc003760637 0xc003760638}] [] 
[{kube-controller-manager Update v1 2020-09-15 10:39:54 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"80943d0a-c654-41f2-808a-b13f218d0a5a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-15 10:39:54 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qrvmq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qrvmq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]Contain
erPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qrvmq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralC
ontainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2020-09-15 10:39:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 15 10:39:58.472: INFO: Pod "webserver-deployment-dd94f59b7-bdjbp" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-bdjbp webserver-deployment-dd94f59b7- deployment-9682 /api/v1/namespaces/deployment-9682/pods/webserver-deployment-dd94f59b7-bdjbp d99baf69-2415-448c-93d1-d05ccf242981 433822 0 2020-09-15 10:39:24 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 80943d0a-c654-41f2-808a-b13f218d0a5a 0xc003760807 0xc003760808}] [] [{kube-controller-manager Update v1 
2020-09-15 10:39:24 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"80943d0a-c654-41f2-808a-b13f218d0a5a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-15 10:39:48 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.44\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qrvmq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qrvmq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]Contain
erPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qrvmq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralCo
ntainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:10.244.1.44,StartTime:2020-09-15 10:39:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-15 10:39:48 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://b4057eac62e48ef7fb994f5a0dc862c573343a1e38f9d29cf3628a665ddf9d57,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.44,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 15 10:39:58.472: INFO: Pod "webserver-deployment-dd94f59b7-bp4sj" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-bp4sj webserver-deployment-dd94f59b7- deployment-9682 /api/v1/namespaces/deployment-9682/pods/webserver-deployment-dd94f59b7-bp4sj b8d4ea44-d92c-4ca7-aedd-96cc499f1bc9 434019 0 2020-09-15 10:39:54 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 
80943d0a-c654-41f2-808a-b13f218d0a5a 0xc003760a87 0xc003760a88}] [] [{kube-controller-manager Update v1 2020-09-15 10:39:54 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"80943d0a-c654-41f2-808a-b13f218d0a5a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-15 10:39:55 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qrvmq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qrvmq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/librar
y/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qrvmq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList
{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2020-09-15 10:39:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 15 10:39:58.472: INFO: Pod "webserver-deployment-dd94f59b7-ct4jm" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-ct4jm webserver-deployment-dd94f59b7- deployment-9682 /api/v1/namespaces/deployment-9682/pods/webserver-deployment-dd94f59b7-ct4jm 1512ddfc-966e-48bd-b63b-b0d30255723c 434029 0 2020-09-15 10:39:54 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 80943d0a-c654-41f2-808a-b13f218d0a5a 
0xc003760c67 0xc003760c68}] [] [{kube-controller-manager Update v1 2020-09-15 10:39:54 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"80943d0a-c654-41f2-808a-b13f218d0a5a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-15 10:39:56 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qrvmq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qrvmq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args
:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qrvmq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]Topolo
gySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2020-09-15 10:39:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 15 10:39:58.473: INFO: Pod "webserver-deployment-dd94f59b7-fx4bs" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-fx4bs webserver-deployment-dd94f59b7- deployment-9682 /api/v1/namespaces/deployment-9682/pods/webserver-deployment-dd94f59b7-fx4bs 954a7999-a180-4304-8394-d07ef2c4f0c9 434043 0 2020-09-15 10:39:54 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 80943d0a-c654-41f2-808a-b13f218d0a5a 0xc003760ed7 0xc003760ed8}] [] 
[{kube-controller-manager Update v1 2020-09-15 10:39:54 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"80943d0a-c654-41f2-808a-b13f218d0a5a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-15 10:39:57 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qrvmq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qrvmq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]Contain
erPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qrvmq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralCo
ntainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:,StartTime:2020-09-15 10:39:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 15 10:39:58.473: INFO: Pod "webserver-deployment-dd94f59b7-fzdhl" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-fzdhl webserver-deployment-dd94f59b7- deployment-9682 /api/v1/namespaces/deployment-9682/pods/webserver-deployment-dd94f59b7-fzdhl 4abda636-8837-4d26-be93-7a2c7d3ac10b 433793 0 2020-09-15 10:39:24 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 80943d0a-c654-41f2-808a-b13f218d0a5a 0xc003761087 0xc003761088}] [] [{kube-controller-manager Update v1 
2020-09-15 10:39:24 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"80943d0a-c654-41f2-808a-b13f218d0a5a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-15 10:39:47 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.42\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qrvmq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qrvmq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]Contain
erPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qrvmq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralCo
ntainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:10.244.1.42,StartTime:2020-09-15 10:39:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-15 10:39:46 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://b1aee98e8dc167ac099332460fdbc4ad41ba2f5379efe7665269187e6819729c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.42,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 15 10:39:58.473: INFO: Pod "webserver-deployment-dd94f59b7-hljvp" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-hljvp webserver-deployment-dd94f59b7- deployment-9682 /api/v1/namespaces/deployment-9682/pods/webserver-deployment-dd94f59b7-hljvp 7c915f07-15c3-4f5a-8c9e-c1654bb5723d 433762 0 2020-09-15 10:39:24 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 
80943d0a-c654-41f2-808a-b13f218d0a5a 0xc0037612e7 0xc0037612e8}] [] [{kube-controller-manager Update v1 2020-09-15 10:39:24 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"80943d0a-c654-41f2-808a-b13f218d0a5a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-15 10:39:43 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.41\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qrvmq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qrvmq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Contain
er{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qrvmq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*Preempt
LowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:10.244.1.41,StartTime:2020-09-15 10:39:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-15 10:39:43 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://16602f9f4c5d5d497b6a50ebcbfed02d1029c01dc90e549df99a9c230807a5d7,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.41,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 15 10:39:58.473: INFO: Pod "webserver-deployment-dd94f59b7-jd8nn" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-jd8nn webserver-deployment-dd94f59b7- deployment-9682 /api/v1/namespaces/deployment-9682/pods/webserver-deployment-dd94f59b7-jd8nn 5d5894d8-37d4-4673-ac8d-9dcecac51ace 433776 0 2020-09-15 10:39:24 +0000 UTC map[name:httpd 
pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 80943d0a-c654-41f2-808a-b13f218d0a5a 0xc003761557 0xc003761558}] [] [{kube-controller-manager Update v1 2020-09-15 10:39:24 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"80943d0a-c654-41f2-808a-b13f218d0a5a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-15 10:39:45 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.43\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qrvmq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qrvmq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil
,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qrvmq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]Pod
ReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:10.244.1.43,StartTime:2020-09-15 10:39:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-15 10:39:45 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://7b2a7d61539af5aedccaef50ce2e545b454e3a3affb94d62da6ccfb9fdc37bbe,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.43,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 15 10:39:58.473: INFO: Pod "webserver-deployment-dd94f59b7-ls575" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-ls575 webserver-deployment-dd94f59b7- deployment-9682 /api/v1/namespaces/deployment-9682/pods/webserver-deployment-dd94f59b7-ls575 
92e373ae-da48-4614-87de-029d06a31f46 434001 0 2020-09-15 10:39:54 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 80943d0a-c654-41f2-808a-b13f218d0a5a 0xc003761757 0xc003761758}] [] [{kube-controller-manager Update v1 2020-09-15 10:39:54 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"80943d0a-c654-41f2-808a-b13f218d0a5a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-15 10:39:55 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qrvmq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qrvmq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qrvmq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},St
artupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-09-15 10:39:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:,StartTime:2020-09-15 10:39:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 15 10:39:58.473: INFO: Pod "webserver-deployment-dd94f59b7-mr6mf" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-mr6mf webserver-deployment-dd94f59b7- deployment-9682 /api/v1/namespaces/deployment-9682/pods/webserver-deployment-dd94f59b7-mr6mf 3c8bccce-4e58-4c29-945e-535a02e7ac97 433703 0 2020-09-15 10:39:24 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 80943d0a-c654-41f2-808a-b13f218d0a5a 0xc003761977 0xc003761978}] [] [{kube-controller-manager Update v1 2020-09-15 10:39:24 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"80943d0a-c654-41f2-808a-b13f218d0a5a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-15 10:39:34 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.40\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qrvmq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qrvmq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qrvmq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fi
le,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:24 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:10.244.2.40,StartTime:2020-09-15 10:39:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-15 10:39:34 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://f72793df1cd37a080a1c039a6138e25899361816a0a5fca1481a2436d5eb1c53,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.40,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 15 10:39:58.474: INFO: Pod "webserver-deployment-dd94f59b7-nr2mc" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-nr2mc webserver-deployment-dd94f59b7- deployment-9682 /api/v1/namespaces/deployment-9682/pods/webserver-deployment-dd94f59b7-nr2mc 077a2b8f-5915-41d7-bd43-a79a66a59aa6 433996 0 2020-09-15 10:39:54 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 80943d0a-c654-41f2-808a-b13f218d0a5a 0xc003761bd7 0xc003761bd8}] [] [{kube-controller-manager Update v1 2020-09-15 10:39:54 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"80943d0a-c654-41f2-808a-b13f218d0a5a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-15 10:39:54 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qrvmq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qrvmq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{
},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qrvmq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{P
hase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2020-09-15 10:39:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 15 10:39:58.474: INFO: Pod "webserver-deployment-dd94f59b7-qrhtd" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-qrhtd webserver-deployment-dd94f59b7- deployment-9682 /api/v1/namespaces/deployment-9682/pods/webserver-deployment-dd94f59b7-qrhtd cd75a463-3818-49f4-8db4-cf615197f2b9 434034 0 2020-09-15 10:39:54 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 80943d0a-c654-41f2-808a-b13f218d0a5a 0xc003761db7 0xc003761db8}] [] [{kube-controller-manager Update v1 2020-09-15 10:39:54 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"80943d0a-c654-41f2-808a-b13f218d0a5a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-15 10:39:57 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qrvmq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qrvmq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{
},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qrvmq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Ph
ase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:,StartTime:2020-09-15 10:39:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 15 10:39:58.474: INFO: Pod "webserver-deployment-dd94f59b7-st5db" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-st5db webserver-deployment-dd94f59b7- deployment-9682 /api/v1/namespaces/deployment-9682/pods/webserver-deployment-dd94f59b7-st5db 24e225ec-5e05-4d6c-a272-7c56c8d83d52 434041 0 2020-09-15 10:39:54 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 80943d0a-c654-41f2-808a-b13f218d0a5a 0xc003761f77 0xc003761f78}] [] [{kube-controller-manager Update v1 2020-09-15 10:39:54 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"80943d0a-c654-41f2-808a-b13f218d0a5a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-15 10:39:57 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qrvmq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qrvmq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{
},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qrvmq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{P
hase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2020-09-15 10:39:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 15 10:39:58.474: INFO: Pod "webserver-deployment-dd94f59b7-tsxnx" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-tsxnx webserver-deployment-dd94f59b7- deployment-9682 /api/v1/namespaces/deployment-9682/pods/webserver-deployment-dd94f59b7-tsxnx 799ab4c9-867f-4cd1-a8ba-8adf5c17e355 434021 0 2020-09-15 10:39:54 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 80943d0a-c654-41f2-808a-b13f218d0a5a 0xc003788107 0xc003788108}] [] [{kube-controller-manager Update v1 2020-09-15 10:39:54 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"80943d0a-c654-41f2-808a-b13f218d0a5a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-15 10:39:55 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qrvmq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qrvmq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{
},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qrvmq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Ph
ase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:,StartTime:2020-09-15 10:39:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 15 10:39:58.474: INFO: Pod "webserver-deployment-dd94f59b7-vlkvk" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-vlkvk webserver-deployment-dd94f59b7- deployment-9682 /api/v1/namespaces/deployment-9682/pods/webserver-deployment-dd94f59b7-vlkvk f506fc3f-8595-4258-ab93-2054455e1db3 433673 0 2020-09-15 10:39:24 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 80943d0a-c654-41f2-808a-b13f218d0a5a 0xc003788297 0xc003788298}] [] [{kube-controller-manager Update v1 2020-09-15 10:39:24 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"80943d0a-c654-41f2-808a-b13f218d0a5a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-15 10:39:30 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.36\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qrvmq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qrvmq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:Resou
rceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qrvmq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHos
tnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:10.244.2.36,StartTime:2020-09-15 10:39:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-15 10:39:29 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://523f2f93ab3929108d61650c8baa220f088dc1abd9430533918c7e3400feab83,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.36,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 15 10:39:58.474: INFO: Pod "webserver-deployment-dd94f59b7-vz7f5" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-vz7f5 webserver-deployment-dd94f59b7- deployment-9682 /api/v1/namespaces/deployment-9682/pods/webserver-deployment-dd94f59b7-vz7f5 c52b1b1b-fff7-4756-acc7-4a0282cdf62e 433968 0 2020-09-15 10:39:54 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 80943d0a-c654-41f2-808a-b13f218d0a5a 0xc003788447 
0xc003788448}] [] [{kube-controller-manager Update v1 2020-09-15 10:39:54 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"80943d0a-c654-41f2-808a-b13f218d0a5a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qrvmq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qrvmq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qrvmq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,R
eadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 15 10:39:58.475: 
INFO: Pod "webserver-deployment-dd94f59b7-w5qzp" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-w5qzp webserver-deployment-dd94f59b7- deployment-9682 /api/v1/namespaces/deployment-9682/pods/webserver-deployment-dd94f59b7-w5qzp e538d20e-a782-43a6-adb8-c701a1f4bee0 433930 0 2020-09-15 10:39:54 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 80943d0a-c654-41f2-808a-b13f218d0a5a 0xc003788570 0xc003788571}] [] [{kube-controller-manager Update v1 2020-09-15 10:39:54 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"80943d0a-c654-41f2-808a-b13f218d0a5a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-15 10:39:54 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qrvmq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qrvmq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qrvmq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},St
artupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 10:39:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-09-15 10:39:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2020-09-15 10:39:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 10:39:58.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9682" for this suite. • [SLOW TEST:34.723 seconds] [sig-apps] Deployment /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":303,"completed":52,"skipped":923,"failed":0} SSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 10:39:58.714: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name projected-secret-test-11d51bbb-b71d-4813-ae4b-a0f3fb74baa0 STEP: Creating a pod to test consume secrets Sep 15 10:39:59.464: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0da26fa8-7bac-4547-8315-b6be2744b801" in namespace "projected-7007" to be "Succeeded or Failed" Sep 15 10:40:00.439: INFO: Pod "pod-projected-secrets-0da26fa8-7bac-4547-8315-b6be2744b801": Phase="Pending", Reason="", readiness=false. Elapsed: 974.934976ms Sep 15 10:40:02.468: INFO: Pod "pod-projected-secrets-0da26fa8-7bac-4547-8315-b6be2744b801": Phase="Pending", Reason="", readiness=false. Elapsed: 3.003980146s Sep 15 10:40:04.552: INFO: Pod "pod-projected-secrets-0da26fa8-7bac-4547-8315-b6be2744b801": Phase="Pending", Reason="", readiness=false. Elapsed: 5.087933887s Sep 15 10:40:06.969: INFO: Pod "pod-projected-secrets-0da26fa8-7bac-4547-8315-b6be2744b801": Phase="Pending", Reason="", readiness=false. Elapsed: 7.504447668s Sep 15 10:40:09.057: INFO: Pod "pod-projected-secrets-0da26fa8-7bac-4547-8315-b6be2744b801": Phase="Pending", Reason="", readiness=false. Elapsed: 9.593127996s Sep 15 10:40:11.243: INFO: Pod "pod-projected-secrets-0da26fa8-7bac-4547-8315-b6be2744b801": Phase="Pending", Reason="", readiness=false. 
Elapsed: 11.778692282s Sep 15 10:40:13.305: INFO: Pod "pod-projected-secrets-0da26fa8-7bac-4547-8315-b6be2744b801": Phase="Running", Reason="", readiness=true. Elapsed: 13.840953647s Sep 15 10:40:15.922: INFO: Pod "pod-projected-secrets-0da26fa8-7bac-4547-8315-b6be2744b801": Phase="Running", Reason="", readiness=true. Elapsed: 16.457767289s Sep 15 10:40:17.937: INFO: Pod "pod-projected-secrets-0da26fa8-7bac-4547-8315-b6be2744b801": Phase="Running", Reason="", readiness=true. Elapsed: 18.472313972s Sep 15 10:40:19.980: INFO: Pod "pod-projected-secrets-0da26fa8-7bac-4547-8315-b6be2744b801": Phase="Running", Reason="", readiness=true. Elapsed: 20.515681899s Sep 15 10:40:21.983: INFO: Pod "pod-projected-secrets-0da26fa8-7bac-4547-8315-b6be2744b801": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.519063636s STEP: Saw pod success Sep 15 10:40:21.983: INFO: Pod "pod-projected-secrets-0da26fa8-7bac-4547-8315-b6be2744b801" satisfied condition "Succeeded or Failed" Sep 15 10:40:21.989: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-0da26fa8-7bac-4547-8315-b6be2744b801 container secret-volume-test: STEP: delete the pod Sep 15 10:40:22.103: INFO: Waiting for pod pod-projected-secrets-0da26fa8-7bac-4547-8315-b6be2744b801 to disappear Sep 15 10:40:22.105: INFO: Pod pod-projected-secrets-0da26fa8-7bac-4547-8315-b6be2744b801 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 10:40:22.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7007" for this suite. 
• [SLOW TEST:23.398 seconds] [sig-storage] Projected secret /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":303,"completed":53,"skipped":927,"failed":0} SSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 10:40:22.112: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to change the type from ExternalName to NodePort [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service externalname-service with the type=ExternalName in namespace services-7586 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in 
namespace services-7586 I0915 10:40:22.446555 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-7586, replica count: 2 I0915 10:40:25.496984 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0915 10:40:28.497256 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Sep 15 10:40:28.497: INFO: Creating new exec pod Sep 15 10:40:33.569: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=services-7586 execpod6kb29 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Sep 15 10:40:33.880: INFO: stderr: "I0915 10:40:33.780789 548 log.go:181] (0xc000164f20) (0xc000bc25a0) Create stream\nI0915 10:40:33.780854 548 log.go:181] (0xc000164f20) (0xc000bc25a0) Stream added, broadcasting: 1\nI0915 10:40:33.782903 548 log.go:181] (0xc000164f20) Reply frame received for 1\nI0915 10:40:33.782936 548 log.go:181] (0xc000164f20) (0xc00062c000) Create stream\nI0915 10:40:33.782945 548 log.go:181] (0xc000164f20) (0xc00062c000) Stream added, broadcasting: 3\nI0915 10:40:33.784072 548 log.go:181] (0xc000164f20) Reply frame received for 3\nI0915 10:40:33.784123 548 log.go:181] (0xc000164f20) (0xc000bc2640) Create stream\nI0915 10:40:33.784229 548 log.go:181] (0xc000164f20) (0xc000bc2640) Stream added, broadcasting: 5\nI0915 10:40:33.785083 548 log.go:181] (0xc000164f20) Reply frame received for 5\nI0915 10:40:33.872590 548 log.go:181] (0xc000164f20) Data frame received for 5\nI0915 10:40:33.872614 548 log.go:181] (0xc000bc2640) (5) Data frame handling\nI0915 10:40:33.872628 548 log.go:181] (0xc000bc2640) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0915 10:40:33.873196 548 log.go:181] (0xc000164f20) Data frame received for 5\nI0915 
10:40:33.873219 548 log.go:181] (0xc000bc2640) (5) Data frame handling\nI0915 10:40:33.873243 548 log.go:181] (0xc000bc2640) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0915 10:40:33.873384 548 log.go:181] (0xc000164f20) Data frame received for 5\nI0915 10:40:33.873410 548 log.go:181] (0xc000bc2640) (5) Data frame handling\nI0915 10:40:33.873433 548 log.go:181] (0xc000164f20) Data frame received for 3\nI0915 10:40:33.873450 548 log.go:181] (0xc00062c000) (3) Data frame handling\nI0915 10:40:33.875246 548 log.go:181] (0xc000164f20) Data frame received for 1\nI0915 10:40:33.875264 548 log.go:181] (0xc000bc25a0) (1) Data frame handling\nI0915 10:40:33.875278 548 log.go:181] (0xc000bc25a0) (1) Data frame sent\nI0915 10:40:33.875301 548 log.go:181] (0xc000164f20) (0xc000bc25a0) Stream removed, broadcasting: 1\nI0915 10:40:33.875314 548 log.go:181] (0xc000164f20) Go away received\nI0915 10:40:33.875619 548 log.go:181] (0xc000164f20) (0xc000bc25a0) Stream removed, broadcasting: 1\nI0915 10:40:33.875633 548 log.go:181] (0xc000164f20) (0xc00062c000) Stream removed, broadcasting: 3\nI0915 10:40:33.875639 548 log.go:181] (0xc000164f20) (0xc000bc2640) Stream removed, broadcasting: 5\n" Sep 15 10:40:33.880: INFO: stdout: "" Sep 15 10:40:33.881: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=services-7586 execpod6kb29 -- /bin/sh -x -c nc -zv -t -w 2 10.103.211.194 80' Sep 15 10:40:34.110: INFO: stderr: "I0915 10:40:34.026987 566 log.go:181] (0xc000011760) (0xc000576960) Create stream\nI0915 10:40:34.027049 566 log.go:181] (0xc000011760) (0xc000576960) Stream added, broadcasting: 1\nI0915 10:40:34.033631 566 log.go:181] (0xc000011760) Reply frame received for 1\nI0915 10:40:34.033667 566 log.go:181] (0xc000011760) (0xc000576000) Create stream\nI0915 10:40:34.033681 566 log.go:181] (0xc000011760) (0xc000576000) Stream added, broadcasting: 3\nI0915 10:40:34.034557 
566 log.go:181] (0xc000011760) Reply frame received for 3\nI0915 10:40:34.034584 566 log.go:181] (0xc000011760) (0xc000748000) Create stream\nI0915 10:40:34.034591 566 log.go:181] (0xc000011760) (0xc000748000) Stream added, broadcasting: 5\nI0915 10:40:34.035266 566 log.go:181] (0xc000011760) Reply frame received for 5\nI0915 10:40:34.104716 566 log.go:181] (0xc000011760) Data frame received for 3\nI0915 10:40:34.104756 566 log.go:181] (0xc000576000) (3) Data frame handling\nI0915 10:40:34.104781 566 log.go:181] (0xc000011760) Data frame received for 5\nI0915 10:40:34.104794 566 log.go:181] (0xc000748000) (5) Data frame handling\nI0915 10:40:34.104807 566 log.go:181] (0xc000748000) (5) Data frame sent\nI0915 10:40:34.104823 566 log.go:181] (0xc000011760) Data frame received for 5\nI0915 10:40:34.104833 566 log.go:181] (0xc000748000) (5) Data frame handling\n+ nc -zv -t -w 2 10.103.211.194 80\nConnection to 10.103.211.194 80 port [tcp/http] succeeded!\nI0915 10:40:34.106093 566 log.go:181] (0xc000011760) Data frame received for 1\nI0915 10:40:34.106122 566 log.go:181] (0xc000576960) (1) Data frame handling\nI0915 10:40:34.106141 566 log.go:181] (0xc000576960) (1) Data frame sent\nI0915 10:40:34.106252 566 log.go:181] (0xc000011760) (0xc000576960) Stream removed, broadcasting: 1\nI0915 10:40:34.106292 566 log.go:181] (0xc000011760) Go away received\nI0915 10:40:34.106628 566 log.go:181] (0xc000011760) (0xc000576960) Stream removed, broadcasting: 1\nI0915 10:40:34.106643 566 log.go:181] (0xc000011760) (0xc000576000) Stream removed, broadcasting: 3\nI0915 10:40:34.106649 566 log.go:181] (0xc000011760) (0xc000748000) Stream removed, broadcasting: 5\n" Sep 15 10:40:34.110: INFO: stdout: "" Sep 15 10:40:34.110: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=services-7586 execpod6kb29 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.11 30130' Sep 15 10:40:34.325: INFO: stderr: "I0915 10:40:34.237574 584 
log.go:181] (0xc000933130) (0xc000e86960) Create stream\nI0915 10:40:34.237659 584 log.go:181] (0xc000933130) (0xc000e86960) Stream added, broadcasting: 1\nI0915 10:40:34.243033 584 log.go:181] (0xc000933130) Reply frame received for 1\nI0915 10:40:34.243081 584 log.go:181] (0xc000933130) (0xc000ab01e0) Create stream\nI0915 10:40:34.243102 584 log.go:181] (0xc000933130) (0xc000ab01e0) Stream added, broadcasting: 3\nI0915 10:40:34.244005 584 log.go:181] (0xc000933130) Reply frame received for 3\nI0915 10:40:34.244026 584 log.go:181] (0xc000933130) (0xc000ab0460) Create stream\nI0915 10:40:34.244034 584 log.go:181] (0xc000933130) (0xc000ab0460) Stream added, broadcasting: 5\nI0915 10:40:34.244994 584 log.go:181] (0xc000933130) Reply frame received for 5\nI0915 10:40:34.319264 584 log.go:181] (0xc000933130) Data frame received for 5\nI0915 10:40:34.319298 584 log.go:181] (0xc000ab0460) (5) Data frame handling\nI0915 10:40:34.319319 584 log.go:181] (0xc000ab0460) (5) Data frame sent\nI0915 10:40:34.319329 584 log.go:181] (0xc000933130) Data frame received for 5\nI0915 10:40:34.319337 584 log.go:181] (0xc000ab0460) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.11 30130\nConnection to 172.18.0.11 30130 port [tcp/30130] succeeded!\nI0915 10:40:34.319357 584 log.go:181] (0xc000ab0460) (5) Data frame sent\nI0915 10:40:34.319725 584 log.go:181] (0xc000933130) Data frame received for 3\nI0915 10:40:34.319738 584 log.go:181] (0xc000ab01e0) (3) Data frame handling\nI0915 10:40:34.319948 584 log.go:181] (0xc000933130) Data frame received for 5\nI0915 10:40:34.319977 584 log.go:181] (0xc000ab0460) (5) Data frame handling\nI0915 10:40:34.321538 584 log.go:181] (0xc000933130) Data frame received for 1\nI0915 10:40:34.321568 584 log.go:181] (0xc000e86960) (1) Data frame handling\nI0915 10:40:34.321597 584 log.go:181] (0xc000e86960) (1) Data frame sent\nI0915 10:40:34.321629 584 log.go:181] (0xc000933130) (0xc000e86960) Stream removed, broadcasting: 1\nI0915 10:40:34.321658 584 
log.go:181] (0xc000933130) Go away received\nI0915 10:40:34.321998 584 log.go:181] (0xc000933130) (0xc000e86960) Stream removed, broadcasting: 1\nI0915 10:40:34.322012 584 log.go:181] (0xc000933130) (0xc000ab01e0) Stream removed, broadcasting: 3\nI0915 10:40:34.322019 584 log.go:181] (0xc000933130) (0xc000ab0460) Stream removed, broadcasting: 5\n" Sep 15 10:40:34.325: INFO: stdout: "" Sep 15 10:40:34.325: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=services-7586 execpod6kb29 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.12 30130' Sep 15 10:40:34.539: INFO: stderr: "I0915 10:40:34.448060 602 log.go:181] (0xc000add130) (0xc000ad4640) Create stream\nI0915 10:40:34.448105 602 log.go:181] (0xc000add130) (0xc000ad4640) Stream added, broadcasting: 1\nI0915 10:40:34.457050 602 log.go:181] (0xc000add130) Reply frame received for 1\nI0915 10:40:34.457123 602 log.go:181] (0xc000add130) (0xc000578000) Create stream\nI0915 10:40:34.457140 602 log.go:181] (0xc000add130) (0xc000578000) Stream added, broadcasting: 3\nI0915 10:40:34.458346 602 log.go:181] (0xc000add130) Reply frame received for 3\nI0915 10:40:34.458402 602 log.go:181] (0xc000add130) (0xc0004f1400) Create stream\nI0915 10:40:34.458423 602 log.go:181] (0xc000add130) (0xc0004f1400) Stream added, broadcasting: 5\nI0915 10:40:34.459527 602 log.go:181] (0xc000add130) Reply frame received for 5\nI0915 10:40:34.531133 602 log.go:181] (0xc000add130) Data frame received for 5\nI0915 10:40:34.531177 602 log.go:181] (0xc0004f1400) (5) Data frame handling\nI0915 10:40:34.531201 602 log.go:181] (0xc0004f1400) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.12 30130\nConnection to 172.18.0.12 30130 port [tcp/30130] succeeded!\nI0915 10:40:34.531308 602 log.go:181] (0xc000add130) Data frame received for 3\nI0915 10:40:34.531356 602 log.go:181] (0xc000578000) (3) Data frame handling\nI0915 10:40:34.531386 602 log.go:181] (0xc000add130) Data frame received for 
5\nI0915 10:40:34.531407 602 log.go:181] (0xc0004f1400) (5) Data frame handling\nI0915 10:40:34.533511 602 log.go:181] (0xc000add130) Data frame received for 1\nI0915 10:40:34.533535 602 log.go:181] (0xc000ad4640) (1) Data frame handling\nI0915 10:40:34.533549 602 log.go:181] (0xc000ad4640) (1) Data frame sent\nI0915 10:40:34.533571 602 log.go:181] (0xc000add130) (0xc000ad4640) Stream removed, broadcasting: 1\nI0915 10:40:34.533600 602 log.go:181] (0xc000add130) Go away received\nI0915 10:40:34.534173 602 log.go:181] (0xc000add130) (0xc000ad4640) Stream removed, broadcasting: 1\nI0915 10:40:34.534200 602 log.go:181] (0xc000add130) (0xc000578000) Stream removed, broadcasting: 3\nI0915 10:40:34.534214 602 log.go:181] (0xc000add130) (0xc0004f1400) Stream removed, broadcasting: 5\n" Sep 15 10:40:34.539: INFO: stdout: "" Sep 15 10:40:34.539: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 10:40:34.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7586" for this suite. 
[AfterEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:12.511 seconds] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":303,"completed":54,"skipped":933,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 10:40:34.624: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 15 
10:40:35.217: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 15 10:40:37.273: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735763235, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735763235, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735763235, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735763235, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 15 10:40:40.369: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 15 10:40:40.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9803" for this suite.
STEP: Destroying namespace "webhook-9803-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.299 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
patching/updating a mutating webhook should work [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":303,"completed":55,"skipped":939,"failed":0}
SSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 15 10:40:40.924: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Sep 15 10:40:40.978: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 15 10:40:42.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-2538" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":303,"completed":56,"skipped":942,"failed":0}
S
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 15 10:40:42.251: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-secret-lhwb
STEP: Creating a pod to test atomic-volume-subpath
Sep 15 10:40:42.767: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-lhwb" in namespace "subpath-742" to be "Succeeded or Failed"
Sep 15 10:40:42.770: INFO: Pod "pod-subpath-test-secret-lhwb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.748924ms
Sep 15 10:40:44.775: INFO: Pod "pod-subpath-test-secret-lhwb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007670034s
Sep 15 10:40:46.781: INFO: Pod "pod-subpath-test-secret-lhwb": Phase="Running", Reason="", readiness=true. Elapsed: 4.013114729s
Sep 15 10:40:48.785: INFO: Pod "pod-subpath-test-secret-lhwb": Phase="Running", Reason="", readiness=true. Elapsed: 6.017782942s
Sep 15 10:40:50.790: INFO: Pod "pod-subpath-test-secret-lhwb": Phase="Running", Reason="", readiness=true. Elapsed: 8.022237609s
Sep 15 10:40:52.793: INFO: Pod "pod-subpath-test-secret-lhwb": Phase="Running", Reason="", readiness=true. Elapsed: 10.025879003s
Sep 15 10:40:54.798: INFO: Pod "pod-subpath-test-secret-lhwb": Phase="Running", Reason="", readiness=true. Elapsed: 12.030404811s
Sep 15 10:40:56.802: INFO: Pod "pod-subpath-test-secret-lhwb": Phase="Running", Reason="", readiness=true. Elapsed: 14.034345937s
Sep 15 10:40:58.806: INFO: Pod "pod-subpath-test-secret-lhwb": Phase="Running", Reason="", readiness=true. Elapsed: 16.038465176s
Sep 15 10:41:00.811: INFO: Pod "pod-subpath-test-secret-lhwb": Phase="Running", Reason="", readiness=true. Elapsed: 18.043822105s
Sep 15 10:41:02.817: INFO: Pod "pod-subpath-test-secret-lhwb": Phase="Running", Reason="", readiness=true. Elapsed: 20.049098021s
Sep 15 10:41:04.822: INFO: Pod "pod-subpath-test-secret-lhwb": Phase="Running", Reason="", readiness=true. Elapsed: 22.054517291s
Sep 15 10:41:06.826: INFO: Pod "pod-subpath-test-secret-lhwb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.058773501s
STEP: Saw pod success
Sep 15 10:41:06.826: INFO: Pod "pod-subpath-test-secret-lhwb" satisfied condition "Succeeded or Failed"
Sep 15 10:41:06.828: INFO: Trying to get logs from node kali-worker2 pod pod-subpath-test-secret-lhwb container test-container-subpath-secret-lhwb:
STEP: delete the pod
Sep 15 10:41:06.878: INFO: Waiting for pod pod-subpath-test-secret-lhwb to disappear
Sep 15 10:41:06.892: INFO: Pod pod-subpath-test-secret-lhwb no longer exists
STEP: Deleting pod pod-subpath-test-secret-lhwb
Sep 15 10:41:06.892: INFO: Deleting pod "pod-subpath-test-secret-lhwb" in namespace "subpath-742"
[AfterEach] [sig-storage] Subpath
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 15 10:41:06.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-742" for this suite.
• [SLOW TEST:24.670 seconds]
[sig-storage] Subpath
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
Atomic writer volumes
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
should support subpaths with secret pod [LinuxOnly] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":303,"completed":57,"skipped":943,"failed":0}
SSSS
------------------------------
[sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 15 10:41:06.921: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-upd-5e968d8f-e643-475c-8cb2-326a9085a648
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 15 10:41:13.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3267" for this suite.
• [SLOW TEST:6.154 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
binary data should be reflected in volume [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":58,"skipped":947,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 15 10:41:13.076: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-1fedc703-d9e0-4f28-9357-25c0e32d781a
STEP: Creating a pod to test consume secrets
Sep 15 10:41:13.885: INFO: Waiting up to 5m0s for pod "pod-secrets-18d90f29-8148-4b5c-811d-dcb90dfde866" in namespace "secrets-8000" to be "Succeeded or Failed"
Sep 15 10:41:13.966: INFO: Pod "pod-secrets-18d90f29-8148-4b5c-811d-dcb90dfde866": Phase="Pending", Reason="", readiness=false. Elapsed: 80.686072ms
Sep 15 10:41:16.020: INFO: Pod "pod-secrets-18d90f29-8148-4b5c-811d-dcb90dfde866": Phase="Pending", Reason="", readiness=false. Elapsed: 2.134865652s
Sep 15 10:41:18.025: INFO: Pod "pod-secrets-18d90f29-8148-4b5c-811d-dcb90dfde866": Phase="Running", Reason="", readiness=true. Elapsed: 4.139531482s
Sep 15 10:41:20.030: INFO: Pod "pod-secrets-18d90f29-8148-4b5c-811d-dcb90dfde866": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.144338165s
STEP: Saw pod success
Sep 15 10:41:20.030: INFO: Pod "pod-secrets-18d90f29-8148-4b5c-811d-dcb90dfde866" satisfied condition "Succeeded or Failed"
Sep 15 10:41:20.033: INFO: Trying to get logs from node kali-worker pod pod-secrets-18d90f29-8148-4b5c-811d-dcb90dfde866 container secret-volume-test:
STEP: delete the pod
Sep 15 10:41:20.099: INFO: Waiting for pod pod-secrets-18d90f29-8148-4b5c-811d-dcb90dfde866 to disappear
Sep 15 10:41:20.106: INFO: Pod pod-secrets-18d90f29-8148-4b5c-811d-dcb90dfde866 no longer exists
[AfterEach] [sig-storage] Secrets
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 15 10:41:20.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8000" for this suite.
• [SLOW TEST:7.038 seconds]
[sig-storage] Secrets
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":303,"completed":59,"skipped":955,"failed":0}
SSS
------------------------------
[sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 15 10:41:20.115: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide podname only [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Sep 15 10:41:20.403: INFO: Waiting up to 5m0s for pod "downwardapi-volume-473d8410-20dd-47ad-9f51-1e7969cc79c6" in namespace "downward-api-9580" to be "Succeeded or Failed"
Sep 15 10:41:20.406: INFO: Pod "downwardapi-volume-473d8410-20dd-47ad-9f51-1e7969cc79c6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.534367ms
Sep 15 10:41:22.698: INFO: Pod "downwardapi-volume-473d8410-20dd-47ad-9f51-1e7969cc79c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.295171478s
Sep 15 10:41:24.703: INFO: Pod "downwardapi-volume-473d8410-20dd-47ad-9f51-1e7969cc79c6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.30085343s
STEP: Saw pod success
Sep 15 10:41:24.704: INFO: Pod "downwardapi-volume-473d8410-20dd-47ad-9f51-1e7969cc79c6" satisfied condition "Succeeded or Failed"
Sep 15 10:41:24.707: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-473d8410-20dd-47ad-9f51-1e7969cc79c6 container client-container:
STEP: delete the pod
Sep 15 10:41:24.825: INFO: Waiting for pod downwardapi-volume-473d8410-20dd-47ad-9f51-1e7969cc79c6 to disappear
Sep 15 10:41:24.887: INFO: Pod downwardapi-volume-473d8410-20dd-47ad-9f51-1e7969cc79c6 no longer exists
[AfterEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 15 10:41:24.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9580" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":303,"completed":60,"skipped":958,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Runtime
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 15 10:41:25.027: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Sep 15 10:41:29.462: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 15 10:41:29.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2754" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":303,"completed":61,"skipped":973,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 15 10:41:29.636: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256
[BeforeEach] Update Demo
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:308
[It] should scale a replication controller [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a replication controller
Sep 15 10:41:29.843: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4287'
Sep 15 10:41:30.239: INFO: stderr: ""
Sep 15 10:41:30.239: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Sep 15 10:41:30.239: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4287'
Sep 15 10:41:30.385: INFO: stderr: ""
Sep 15 10:41:30.385: INFO: stdout: "update-demo-nautilus-bcnlh update-demo-nautilus-lfxc7 "
Sep 15 10:41:30.385: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bcnlh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4287'
Sep 15 10:41:30.524: INFO: stderr: ""
Sep 15 10:41:30.524: INFO: stdout: ""
Sep 15 10:41:30.524: INFO: update-demo-nautilus-bcnlh is created but not running
Sep 15 10:41:35.524: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4287'
Sep 15 10:41:35.630: INFO: stderr: ""
Sep 15 10:41:35.630: INFO: stdout: "update-demo-nautilus-bcnlh update-demo-nautilus-lfxc7 "
Sep 15 10:41:35.630: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bcnlh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4287'
Sep 15 10:41:35.776: INFO: stderr: ""
Sep 15 10:41:35.776: INFO: stdout: "true"
Sep 15 10:41:35.776: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bcnlh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4287'
Sep 15 10:41:35.887: INFO: stderr: ""
Sep 15 10:41:35.887: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Sep 15 10:41:35.887: INFO: validating pod update-demo-nautilus-bcnlh
Sep 15 10:41:35.892: INFO: got data: {
  "image": "nautilus.jpg"
}
Sep 15 10:41:35.892: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Sep 15 10:41:35.892: INFO: update-demo-nautilus-bcnlh is verified up and running
Sep 15 10:41:35.892: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lfxc7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4287'
Sep 15 10:41:35.991: INFO: stderr: ""
Sep 15 10:41:35.991: INFO: stdout: "true"
Sep 15 10:41:35.991: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lfxc7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4287'
Sep 15 10:41:36.099: INFO: stderr: ""
Sep 15 10:41:36.099: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Sep 15 10:41:36.099: INFO: validating pod update-demo-nautilus-lfxc7
Sep 15 10:41:36.104: INFO: got data: {
  "image": "nautilus.jpg"
}
Sep 15 10:41:36.104: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Sep 15 10:41:36.104: INFO: update-demo-nautilus-lfxc7 is verified up and running
STEP: scaling down the replication controller
Sep 15 10:41:36.106: INFO: scanned /root for discovery docs:
Sep 15 10:41:36.106: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-4287'
Sep 15 10:41:37.241: INFO: stderr: ""
Sep 15 10:41:37.241: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Sep 15 10:41:37.241: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4287'
Sep 15 10:41:37.362: INFO: stderr: ""
Sep 15 10:41:37.362: INFO: stdout: "update-demo-nautilus-bcnlh update-demo-nautilus-lfxc7 "
STEP: Replicas for name=update-demo: expected=1 actual=2
Sep 15 10:41:42.362: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4287'
Sep 15 10:41:42.480: INFO: stderr: ""
Sep 15 10:41:42.480: INFO: stdout: "update-demo-nautilus-lfxc7 "
Sep 15 10:41:42.480: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lfxc7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4287'
Sep 15 10:41:42.621: INFO: stderr: ""
Sep 15 10:41:42.621: INFO: stdout: "true"
Sep 15 10:41:42.621: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lfxc7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4287'
Sep 15 10:41:42.708: INFO: stderr: ""
Sep 15 10:41:42.708: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Sep 15 10:41:42.708: INFO: validating pod update-demo-nautilus-lfxc7
Sep 15 10:41:42.711: INFO: got data: {
  "image": "nautilus.jpg"
}
Sep 15 10:41:42.711: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Sep 15 10:41:42.711: INFO: update-demo-nautilus-lfxc7 is verified up and running
STEP: scaling up the replication controller
Sep 15 10:41:42.714: INFO: scanned /root for discovery docs:
Sep 15 10:41:42.714: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-4287'
Sep 15 10:41:43.894: INFO: stderr: ""
Sep 15 10:41:43.894: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Sep 15 10:41:43.894: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4287'
Sep 15 10:41:43.998: INFO: stderr: ""
Sep 15 10:41:43.998: INFO: stdout: "update-demo-nautilus-2xl2h update-demo-nautilus-lfxc7 "
Sep 15 10:41:43.998: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2xl2h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4287'
Sep 15 10:41:44.129: INFO: stderr: ""
Sep 15 10:41:44.129: INFO: stdout: ""
Sep 15 10:41:44.129: INFO: update-demo-nautilus-2xl2h is created but not running
Sep 15 10:41:49.130: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4287'
Sep 15 10:41:49.237: INFO: stderr: ""
Sep 15 10:41:49.237: INFO: stdout: "update-demo-nautilus-2xl2h update-demo-nautilus-lfxc7 "
Sep 15 10:41:49.238: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2xl2h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4287'
Sep 15 10:41:49.333: INFO: stderr: ""
Sep 15 10:41:49.333: INFO: stdout: "true"
Sep 15 10:41:49.333: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2xl2h -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4287'
Sep 15 10:41:49.440: INFO: stderr: ""
Sep 15 10:41:49.440: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Sep 15 10:41:49.440: INFO: validating pod update-demo-nautilus-2xl2h
Sep 15 10:41:49.445: INFO: got data: {
  "image": "nautilus.jpg"
}
Sep 15 10:41:49.445: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Sep 15 10:41:49.445: INFO: update-demo-nautilus-2xl2h is verified up and running
Sep 15 10:41:49.445: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lfxc7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4287'
Sep 15 10:41:49.549: INFO: stderr: ""
Sep 15 10:41:49.549: INFO: stdout: "true"
Sep 15 10:41:49.549: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lfxc7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4287'
Sep 15 10:41:49.818: INFO: stderr: ""
Sep 15 10:41:49.818: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Sep 15 10:41:49.818: INFO: validating pod update-demo-nautilus-lfxc7
Sep 15 10:41:49.822: INFO: got data: {
  "image": "nautilus.jpg"
}
Sep 15 10:41:49.822: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Sep 15 10:41:49.822: INFO: update-demo-nautilus-lfxc7 is verified up and running
STEP: using delete to clean up resources
Sep 15 10:41:49.822: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4287'
Sep 15 10:41:49.928: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Sep 15 10:41:49.928: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Sep 15 10:41:49.928: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4287'
Sep 15 10:41:50.053: INFO: stderr: "No resources found in kubectl-4287 namespace.\n"
Sep 15 10:41:50.053: INFO: stdout: ""
Sep 15 10:41:50.053: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4287 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Sep 15 10:41:50.302: INFO: stderr: ""
Sep 15 10:41:50.302: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 15 10:41:50.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4287" for this suite.
• [SLOW TEST:20.675 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Update Demo
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:306
should scale a replication controller [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":303,"completed":62,"skipped":1013,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should get a host IP [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 15 10:41:50.313: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181
[It] should get a host IP [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating pod
Sep 15 10:41:54.760: INFO: Pod pod-hostip-2cb0a703-ec70-48b0-8f6a-4471626e24cb has hostIP: 172.18.0.12
[AfterEach] [k8s.io] Pods
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 15 10:41:54.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2201" for this suite.
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":303,"completed":63,"skipped":1077,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 15 10:41:54.773: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory limit [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Sep 15 10:41:54.835: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d4cdcede-189b-4555-ac1c-c7b70c0d220a" in namespace "downward-api-3181" to be "Succeeded or Failed"
Sep 15 10:41:54.847: INFO: Pod "downwardapi-volume-d4cdcede-189b-4555-ac1c-c7b70c0d220a": Phase="Pending", Reason="", readiness=false. Elapsed: 12.150416ms
Sep 15 10:41:56.852: INFO: Pod "downwardapi-volume-d4cdcede-189b-4555-ac1c-c7b70c0d220a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016727451s
Sep 15 10:41:58.901: INFO: Pod "downwardapi-volume-d4cdcede-189b-4555-ac1c-c7b70c0d220a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.065913539s
STEP: Saw pod success
Sep 15 10:41:58.901: INFO: Pod "downwardapi-volume-d4cdcede-189b-4555-ac1c-c7b70c0d220a" satisfied condition "Succeeded or Failed"
Sep 15 10:41:58.905: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-d4cdcede-189b-4555-ac1c-c7b70c0d220a container client-container:
STEP: delete the pod
Sep 15 10:41:58.942: INFO: Waiting for pod downwardapi-volume-d4cdcede-189b-4555-ac1c-c7b70c0d220a to disappear
Sep 15 10:41:58.952: INFO: Pod downwardapi-volume-d4cdcede-189b-4555-ac1c-c7b70c0d220a no longer exists
[AfterEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 15 10:41:58.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3181" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":303,"completed":64,"skipped":1099,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 10:41:58.965: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Sep 15 10:41:59.060: INFO: >>> kubeConfig: /root/.kube/config Sep 15 10:42:02.045: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 10:42:12.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-957" for this suite. 
• [SLOW TEST:14.006 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":303,"completed":65,"skipped":1112,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 10:42:12.972: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 15 10:42:13.022: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Sep 15 10:42:16.003: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6292 create -f -' Sep 15 10:42:19.741: INFO: stderr: "" Sep 15 10:42:19.741: INFO: stdout: "e2e-test-crd-publish-openapi-9540-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Sep 15 10:42:19.741: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6292 delete e2e-test-crd-publish-openapi-9540-crds test-cr' Sep 15 10:42:19.864: INFO: stderr: "" Sep 15 10:42:19.864: INFO: stdout: "e2e-test-crd-publish-openapi-9540-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Sep 15 10:42:19.864: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6292 apply -f -' Sep 15 10:42:20.183: INFO: stderr: "" Sep 15 10:42:20.183: INFO: stdout: "e2e-test-crd-publish-openapi-9540-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Sep 15 10:42:20.183: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6292 delete e2e-test-crd-publish-openapi-9540-crds test-cr' Sep 15 10:42:20.355: INFO: stderr: "" Sep 15 10:42:20.355: INFO: stdout: "e2e-test-crd-publish-openapi-9540-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Sep 15 10:42:20.355: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9540-crds' Sep 15 10:42:20.669: INFO: stderr: "" Sep 15 10:42:20.669: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9540-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 10:42:23.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6292" for this suite. • [SLOW TEST:10.724 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":303,"completed":66,"skipped":1134,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 10:42:23.697: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should serve a basic endpoint from pods [Conformance] 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service endpoint-test2 in namespace services-2851 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2851 to expose endpoints map[] Sep 15 10:42:23.834: INFO: Failed to get Endpoints object: endpoints "endpoint-test2" not found Sep 15 10:42:24.843: INFO: successfully validated that service endpoint-test2 in namespace services-2851 exposes endpoints map[] STEP: Creating pod pod1 in namespace services-2851 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2851 to expose endpoints map[pod1:[80]] Sep 15 10:42:29.024: INFO: successfully validated that service endpoint-test2 in namespace services-2851 exposes endpoints map[pod1:[80]] STEP: Creating pod pod2 in namespace services-2851 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2851 to expose endpoints map[pod1:[80] pod2:[80]] Sep 15 10:42:33.089: INFO: successfully validated that service endpoint-test2 in namespace services-2851 exposes endpoints map[pod1:[80] pod2:[80]] STEP: Deleting pod pod1 in namespace services-2851 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2851 to expose endpoints map[pod2:[80]] Sep 15 10:42:33.133: INFO: successfully validated that service endpoint-test2 in namespace services-2851 exposes endpoints map[pod2:[80]] STEP: Deleting pod pod2 in namespace services-2851 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2851 to expose endpoints map[] Sep 15 10:42:34.187: INFO: successfully validated that service endpoint-test2 in namespace services-2851 exposes endpoints map[] [AfterEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 10:42:34.345: INFO: Waiting up to 3m0s for all
(but 0) nodes to be ready STEP: Destroying namespace "services-2851" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:10.690 seconds] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":303,"completed":67,"skipped":1187,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 10:42:34.387: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on tmpfs Sep 15 10:42:34.577: INFO: Waiting up to 5m0s for pod "pod-024e3316-7059-44ec-a149-cd14aa1a1479" in namespace "emptydir-6780" to 
be "Succeeded or Failed" Sep 15 10:42:34.598: INFO: Pod "pod-024e3316-7059-44ec-a149-cd14aa1a1479": Phase="Pending", Reason="", readiness=false. Elapsed: 20.644156ms Sep 15 10:42:36.609: INFO: Pod "pod-024e3316-7059-44ec-a149-cd14aa1a1479": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031619276s Sep 15 10:42:38.619: INFO: Pod "pod-024e3316-7059-44ec-a149-cd14aa1a1479": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.042119169s STEP: Saw pod success Sep 15 10:42:38.619: INFO: Pod "pod-024e3316-7059-44ec-a149-cd14aa1a1479" satisfied condition "Succeeded or Failed" Sep 15 10:42:38.622: INFO: Trying to get logs from node kali-worker pod pod-024e3316-7059-44ec-a149-cd14aa1a1479 container test-container: STEP: delete the pod Sep 15 10:42:38.933: INFO: Waiting for pod pod-024e3316-7059-44ec-a149-cd14aa1a1479 to disappear Sep 15 10:42:38.966: INFO: Pod pod-024e3316-7059-44ec-a149-cd14aa1a1479 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 10:42:38.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6780" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":68,"skipped":1204,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 10:42:38.975: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 15 10:42:39.255: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
Sep 15 10:42:39.294: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 15 10:42:39.315: INFO: Number of nodes with available pods: 0 Sep 15 10:42:39.315: INFO: Node kali-worker is running more than one daemon pod Sep 15 10:42:40.320: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 15 10:42:40.323: INFO: Number of nodes with available pods: 0 Sep 15 10:42:40.323: INFO: Node kali-worker is running more than one daemon pod Sep 15 10:42:41.567: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 15 10:42:41.570: INFO: Number of nodes with available pods: 0 Sep 15 10:42:41.570: INFO: Node kali-worker is running more than one daemon pod Sep 15 10:42:42.319: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 15 10:42:42.323: INFO: Number of nodes with available pods: 0 Sep 15 10:42:42.323: INFO: Node kali-worker is running more than one daemon pod Sep 15 10:42:43.333: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 15 10:42:43.352: INFO: Number of nodes with available pods: 0 Sep 15 10:42:43.352: INFO: Node kali-worker is running more than one daemon pod Sep 15 10:42:44.319: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 15 10:42:44.322: INFO: Number of nodes with available pods: 2 Sep 15 10:42:44.322: INFO: Number of running 
nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Sep 15 10:42:44.382: INFO: Wrong image for pod: daemon-set-jq4vs. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 15 10:42:44.382: INFO: Wrong image for pod: daemon-set-ttmt5. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 15 10:42:44.429: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 15 10:42:45.435: INFO: Wrong image for pod: daemon-set-jq4vs. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 15 10:42:45.435: INFO: Wrong image for pod: daemon-set-ttmt5. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 15 10:42:45.439: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 15 10:42:46.433: INFO: Wrong image for pod: daemon-set-jq4vs. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 15 10:42:46.433: INFO: Wrong image for pod: daemon-set-ttmt5. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 15 10:42:46.436: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 15 10:42:47.435: INFO: Wrong image for pod: daemon-set-jq4vs. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 15 10:42:47.435: INFO: Wrong image for pod: daemon-set-ttmt5. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. 
Sep 15 10:42:47.439: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 15 10:42:48.435: INFO: Wrong image for pod: daemon-set-jq4vs. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 15 10:42:48.435: INFO: Wrong image for pod: daemon-set-ttmt5. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 15 10:42:48.435: INFO: Pod daemon-set-ttmt5 is not available Sep 15 10:42:48.439: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 15 10:42:49.434: INFO: Wrong image for pod: daemon-set-jq4vs. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 15 10:42:49.434: INFO: Wrong image for pod: daemon-set-ttmt5. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 15 10:42:49.434: INFO: Pod daemon-set-ttmt5 is not available Sep 15 10:42:49.438: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 15 10:42:50.458: INFO: Wrong image for pod: daemon-set-jq4vs. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 15 10:42:50.458: INFO: Wrong image for pod: daemon-set-ttmt5. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 15 10:42:50.458: INFO: Pod daemon-set-ttmt5 is not available Sep 15 10:42:50.462: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 15 10:42:51.435: INFO: Wrong image for pod: daemon-set-jq4vs. 
Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 15 10:42:51.435: INFO: Wrong image for pod: daemon-set-ttmt5. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 15 10:42:51.435: INFO: Pod daemon-set-ttmt5 is not available Sep 15 10:42:51.439: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 15 10:42:52.477: INFO: Wrong image for pod: daemon-set-jq4vs. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 15 10:42:52.477: INFO: Wrong image for pod: daemon-set-ttmt5. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 15 10:42:52.477: INFO: Pod daemon-set-ttmt5 is not available Sep 15 10:42:52.481: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 15 10:42:53.482: INFO: Pod daemon-set-6kdxb is not available Sep 15 10:42:53.482: INFO: Wrong image for pod: daemon-set-jq4vs. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 15 10:42:53.526: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 15 10:42:54.471: INFO: Pod daemon-set-6kdxb is not available Sep 15 10:42:54.471: INFO: Wrong image for pod: daemon-set-jq4vs. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. 
Sep 15 10:42:54.474: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 15 10:42:55.434: INFO: Pod daemon-set-6kdxb is not available Sep 15 10:42:55.434: INFO: Wrong image for pod: daemon-set-jq4vs. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 15 10:42:55.438: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 15 10:42:56.446: INFO: Pod daemon-set-6kdxb is not available Sep 15 10:42:56.446: INFO: Wrong image for pod: daemon-set-jq4vs. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 15 10:42:56.450: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 15 10:42:57.433: INFO: Wrong image for pod: daemon-set-jq4vs. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 15 10:42:57.437: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 15 10:42:58.434: INFO: Wrong image for pod: daemon-set-jq4vs. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 15 10:42:58.434: INFO: Pod daemon-set-jq4vs is not available Sep 15 10:42:58.437: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 15 10:42:59.435: INFO: Wrong image for pod: daemon-set-jq4vs. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. 
Sep 15 10:42:59.435: INFO: Pod daemon-set-jq4vs is not available Sep 15 10:42:59.440: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 15 10:43:00.434: INFO: Wrong image for pod: daemon-set-jq4vs. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 15 10:43:00.434: INFO: Pod daemon-set-jq4vs is not available Sep 15 10:43:00.439: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 15 10:43:01.435: INFO: Wrong image for pod: daemon-set-jq4vs. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 15 10:43:01.435: INFO: Pod daemon-set-jq4vs is not available Sep 15 10:43:01.439: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 15 10:43:02.446: INFO: Wrong image for pod: daemon-set-jq4vs. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 15 10:43:02.446: INFO: Pod daemon-set-jq4vs is not available Sep 15 10:43:02.451: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 15 10:43:03.449: INFO: Pod daemon-set-ml226 is not available Sep 15 10:43:03.470: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
Sep 15 10:43:03.484: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 15 10:43:03.490: INFO: Number of nodes with available pods: 1 Sep 15 10:43:03.490: INFO: Node kali-worker is running more than one daemon pod Sep 15 10:43:04.525: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 15 10:43:04.528: INFO: Number of nodes with available pods: 1 Sep 15 10:43:04.528: INFO: Node kali-worker is running more than one daemon pod Sep 15 10:43:05.495: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 15 10:43:05.499: INFO: Number of nodes with available pods: 1 Sep 15 10:43:05.499: INFO: Node kali-worker is running more than one daemon pod Sep 15 10:43:06.496: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 15 10:43:06.500: INFO: Number of nodes with available pods: 1 Sep 15 10:43:06.500: INFO: Node kali-worker is running more than one daemon pod Sep 15 10:43:07.496: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 15 10:43:07.499: INFO: Number of nodes with available pods: 2 Sep 15 10:43:07.499: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace 
daemonsets-4215, will wait for the garbage collector to delete the pods Sep 15 10:43:07.573: INFO: Deleting DaemonSet.extensions daemon-set took: 7.441445ms Sep 15 10:43:08.073: INFO: Terminating DaemonSet.extensions daemon-set pods took: 500.27011ms Sep 15 10:43:13.277: INFO: Number of nodes with available pods: 0 Sep 15 10:43:13.277: INFO: Number of running nodes: 0, number of available pods: 0 Sep 15 10:43:13.279: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4215/daemonsets","resourceVersion":"435874"},"items":null} Sep 15 10:43:13.282: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4215/pods","resourceVersion":"435874"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 10:43:13.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4215" for this suite. 
• [SLOW TEST:34.322 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":303,"completed":69,"skipped":1218,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 10:43:13.298: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-c9fb915e-9765-4f7a-a0a1-4df8f09ad3a6 in namespace container-probe-3786 Sep 15 10:43:17.416: INFO: Started 
pod liveness-c9fb915e-9765-4f7a-a0a1-4df8f09ad3a6 in namespace container-probe-3786 STEP: checking the pod's current state and verifying that restartCount is present Sep 15 10:43:17.419: INFO: Initial restart count of pod liveness-c9fb915e-9765-4f7a-a0a1-4df8f09ad3a6 is 0 Sep 15 10:43:39.995: INFO: Restart count of pod container-probe-3786/liveness-c9fb915e-9765-4f7a-a0a1-4df8f09ad3a6 is now 1 (22.575816937s elapsed) Sep 15 10:43:58.029: INFO: Restart count of pod container-probe-3786/liveness-c9fb915e-9765-4f7a-a0a1-4df8f09ad3a6 is now 2 (40.610035261s elapsed) Sep 15 10:44:18.068: INFO: Restart count of pod container-probe-3786/liveness-c9fb915e-9765-4f7a-a0a1-4df8f09ad3a6 is now 3 (1m0.64948177s elapsed) Sep 15 10:44:38.194: INFO: Restart count of pod container-probe-3786/liveness-c9fb915e-9765-4f7a-a0a1-4df8f09ad3a6 is now 4 (1m20.775419207s elapsed) Sep 15 10:45:42.348: INFO: Restart count of pod container-probe-3786/liveness-c9fb915e-9765-4f7a-a0a1-4df8f09ad3a6 is now 5 (2m24.928737496s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 10:45:42.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3786" for this suite. 
• [SLOW TEST:149.071 seconds] [k8s.io] Probing container /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":303,"completed":70,"skipped":1265,"failed":0} SS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 10:45:42.370: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. 
[Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 10:45:55.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-808" for this suite. • [SLOW TEST:13.248 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance]","total":303,"completed":71,"skipped":1267,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 10:45:55.619: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop simple daemon [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Sep 15 10:45:55.780: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 15 10:45:55.794: INFO: Number of nodes with available pods: 0 Sep 15 10:45:55.794: INFO: Node kali-worker is running more than one daemon pod Sep 15 10:45:56.801: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 15 10:45:56.804: INFO: Number of nodes with available pods: 0 Sep 15 10:45:56.804: INFO: Node kali-worker is running more than one daemon pod Sep 15 10:45:58.018: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 15 10:45:58.023: INFO: Number of nodes with available pods: 0 Sep 15 10:45:58.023: INFO: Node kali-worker is running more than one daemon pod Sep 15 10:45:58.800: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 15 10:45:58.804: INFO: Number of nodes with available pods: 0 Sep 15 10:45:58.804: INFO: Node kali-worker is running more than one daemon pod Sep 15 10:45:59.823: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 15 10:45:59.826: INFO: Number of nodes with available pods: 0 Sep 15 10:45:59.826: INFO: Node kali-worker is running more than one daemon pod Sep 15 10:46:00.798: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 15 10:46:00.801: INFO: Number of nodes with available pods: 1 Sep 15 10:46:00.801: INFO: Node kali-worker2 
is running more than one daemon pod Sep 15 10:46:01.801: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 15 10:46:01.804: INFO: Number of nodes with available pods: 2 Sep 15 10:46:01.804: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. Sep 15 10:46:01.822: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 15 10:46:01.825: INFO: Number of nodes with available pods: 1 Sep 15 10:46:01.825: INFO: Node kali-worker is running more than one daemon pod Sep 15 10:46:02.829: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 15 10:46:02.832: INFO: Number of nodes with available pods: 1 Sep 15 10:46:02.832: INFO: Node kali-worker is running more than one daemon pod Sep 15 10:46:03.831: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 15 10:46:03.835: INFO: Number of nodes with available pods: 1 Sep 15 10:46:03.835: INFO: Node kali-worker is running more than one daemon pod Sep 15 10:46:04.830: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 15 10:46:04.833: INFO: Number of nodes with available pods: 1 Sep 15 10:46:04.833: INFO: Node kali-worker is running more than one daemon pod Sep 15 10:46:05.830: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 15 
10:46:05.834: INFO: Number of nodes with available pods: 1 Sep 15 10:46:05.834: INFO: Node kali-worker is running more than one daemon pod Sep 15 10:46:06.829: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 15 10:46:06.832: INFO: Number of nodes with available pods: 1 Sep 15 10:46:06.832: INFO: Node kali-worker is running more than one daemon pod Sep 15 10:46:07.831: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 15 10:46:07.836: INFO: Number of nodes with available pods: 1 Sep 15 10:46:07.836: INFO: Node kali-worker is running more than one daemon pod Sep 15 10:46:08.831: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 15 10:46:08.835: INFO: Number of nodes with available pods: 1 Sep 15 10:46:08.835: INFO: Node kali-worker is running more than one daemon pod Sep 15 10:46:09.831: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 15 10:46:09.835: INFO: Number of nodes with available pods: 1 Sep 15 10:46:09.835: INFO: Node kali-worker is running more than one daemon pod Sep 15 10:46:10.832: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 15 10:46:10.835: INFO: Number of nodes with available pods: 1 Sep 15 10:46:10.835: INFO: Node kali-worker is running more than one daemon pod Sep 15 10:46:11.830: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule 
TimeAdded:}], skip checking this node Sep 15 10:46:11.834: INFO: Number of nodes with available pods: 1 Sep 15 10:46:11.834: INFO: Node kali-worker is running more than one daemon pod Sep 15 10:46:12.830: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 15 10:46:12.833: INFO: Number of nodes with available pods: 1 Sep 15 10:46:12.833: INFO: Node kali-worker is running more than one daemon pod Sep 15 10:46:13.830: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 15 10:46:13.833: INFO: Number of nodes with available pods: 1 Sep 15 10:46:13.833: INFO: Node kali-worker is running more than one daemon pod Sep 15 10:46:14.829: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 15 10:46:14.832: INFO: Number of nodes with available pods: 1 Sep 15 10:46:14.832: INFO: Node kali-worker is running more than one daemon pod Sep 15 10:46:15.830: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 15 10:46:15.835: INFO: Number of nodes with available pods: 1 Sep 15 10:46:15.835: INFO: Node kali-worker is running more than one daemon pod Sep 15 10:46:17.213: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 15 10:46:17.809: INFO: Number of nodes with available pods: 1 Sep 15 10:46:17.809: INFO: Node kali-worker is running more than one daemon pod Sep 15 10:46:17.868: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints 
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 15 10:46:17.903: INFO: Number of nodes with available pods: 1 Sep 15 10:46:17.903: INFO: Node kali-worker is running more than one daemon pod Sep 15 10:46:18.831: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 15 10:46:18.835: INFO: Number of nodes with available pods: 1 Sep 15 10:46:18.835: INFO: Node kali-worker is running more than one daemon pod Sep 15 10:46:19.832: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 15 10:46:19.834: INFO: Number of nodes with available pods: 1 Sep 15 10:46:19.834: INFO: Node kali-worker is running more than one daemon pod Sep 15 10:46:20.831: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 15 10:46:20.836: INFO: Number of nodes with available pods: 2 Sep 15 10:46:20.836: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9266, will wait for the garbage collector to delete the pods Sep 15 10:46:20.899: INFO: Deleting DaemonSet.extensions daemon-set took: 6.746562ms Sep 15 10:46:21.300: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.772204ms Sep 15 10:46:33.303: INFO: Number of nodes with available pods: 0 Sep 15 10:46:33.303: INFO: Number of running nodes: 0, number of available pods: 0 Sep 15 10:46:33.306: INFO: daemonset: 
{"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9266/daemonsets","resourceVersion":"436708"},"items":null} Sep 15 10:46:33.308: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9266/pods","resourceVersion":"436708"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 10:46:33.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-9266" for this suite. • [SLOW TEST:37.701 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":303,"completed":72,"skipped":1341,"failed":0} SSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 10:46:33.321: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] 
ReplicationController /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should adopt matching pods on creation [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 10:46:38.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-2288" for this suite. • [SLOW TEST:5.156 seconds] [sig-apps] ReplicationController /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":303,"completed":73,"skipped":1347,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client 
Sep 15 10:46:38.477: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Sep 15 10:46:38.561: INFO: Waiting up to 5m0s for pod "downward-api-f3355c8e-e903-40d2-ad1f-2e2d6faee787" in namespace "downward-api-4797" to be "Succeeded or Failed" Sep 15 10:46:38.572: INFO: Pod "downward-api-f3355c8e-e903-40d2-ad1f-2e2d6faee787": Phase="Pending", Reason="", readiness=false. Elapsed: 11.687471ms Sep 15 10:46:40.578: INFO: Pod "downward-api-f3355c8e-e903-40d2-ad1f-2e2d6faee787": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017051729s Sep 15 10:46:42.583: INFO: Pod "downward-api-f3355c8e-e903-40d2-ad1f-2e2d6faee787": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022209936s STEP: Saw pod success Sep 15 10:46:42.583: INFO: Pod "downward-api-f3355c8e-e903-40d2-ad1f-2e2d6faee787" satisfied condition "Succeeded or Failed" Sep 15 10:46:42.586: INFO: Trying to get logs from node kali-worker2 pod downward-api-f3355c8e-e903-40d2-ad1f-2e2d6faee787 container dapi-container: STEP: delete the pod Sep 15 10:46:42.633: INFO: Waiting for pod downward-api-f3355c8e-e903-40d2-ad1f-2e2d6faee787 to disappear Sep 15 10:46:42.640: INFO: Pod downward-api-f3355c8e-e903-40d2-ad1f-2e2d6faee787 no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 10:46:42.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4797" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":303,"completed":74,"skipped":1370,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 10:46:42.662: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod test-webserver-cc13ecf6-b953-454f-96f2-db53cebc293e in namespace container-probe-9513 Sep 15 10:46:46.738: INFO: Started pod test-webserver-cc13ecf6-b953-454f-96f2-db53cebc293e in namespace container-probe-9513 STEP: checking the pod's current state and verifying that restartCount is present Sep 15 10:46:46.741: INFO: Initial restart count of pod test-webserver-cc13ecf6-b953-454f-96f2-db53cebc293e is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 
15 10:50:47.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9513" for this suite. • [SLOW TEST:244.828 seconds] [k8s.io] Probing container /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":303,"completed":75,"skipped":1397,"failed":0} [k8s.io] Lease lease API should be available [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Lease /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 10:50:47.491: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Lease /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 10:50:47.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-1746" for this suite. 
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":303,"completed":76,"skipped":1397,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 10:50:47.853: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on tmpfs Sep 15 10:50:48.056: INFO: Waiting up to 5m0s for pod "pod-1688d025-94b4-4ad4-ba51-dc8e521d5b6a" in namespace "emptydir-6107" to be "Succeeded or Failed" Sep 15 10:50:48.134: INFO: Pod "pod-1688d025-94b4-4ad4-ba51-dc8e521d5b6a": Phase="Pending", Reason="", readiness=false. Elapsed: 77.805628ms Sep 15 10:50:50.138: INFO: Pod "pod-1688d025-94b4-4ad4-ba51-dc8e521d5b6a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081921291s Sep 15 10:50:52.143: INFO: Pod "pod-1688d025-94b4-4ad4-ba51-dc8e521d5b6a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.086441343s STEP: Saw pod success Sep 15 10:50:52.143: INFO: Pod "pod-1688d025-94b4-4ad4-ba51-dc8e521d5b6a" satisfied condition "Succeeded or Failed" Sep 15 10:50:52.146: INFO: Trying to get logs from node kali-worker pod pod-1688d025-94b4-4ad4-ba51-dc8e521d5b6a container test-container: STEP: delete the pod Sep 15 10:50:52.195: INFO: Waiting for pod pod-1688d025-94b4-4ad4-ba51-dc8e521d5b6a to disappear Sep 15 10:50:52.222: INFO: Pod pod-1688d025-94b4-4ad4-ba51-dc8e521d5b6a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 10:50:52.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6107" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":77,"skipped":1409,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 10:50:52.230: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init 
containers on a RestartAlways pod [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod Sep 15 10:50:52.292: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 10:51:01.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-2553" for this suite. • [SLOW TEST:9.140 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should invoke init containers on a RestartAlways pod [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":303,"completed":78,"skipped":1426,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 10:51:01.371: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned 
in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test env composition Sep 15 10:51:01.433: INFO: Waiting up to 5m0s for pod "var-expansion-5a0b0123-9356-49ef-a2ed-d43fb2db7d0d" in namespace "var-expansion-1074" to be "Succeeded or Failed" Sep 15 10:51:01.437: INFO: Pod "var-expansion-5a0b0123-9356-49ef-a2ed-d43fb2db7d0d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.857425ms Sep 15 10:51:03.441: INFO: Pod "var-expansion-5a0b0123-9356-49ef-a2ed-d43fb2db7d0d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008476725s Sep 15 10:51:05.447: INFO: Pod "var-expansion-5a0b0123-9356-49ef-a2ed-d43fb2db7d0d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0136907s STEP: Saw pod success Sep 15 10:51:05.447: INFO: Pod "var-expansion-5a0b0123-9356-49ef-a2ed-d43fb2db7d0d" satisfied condition "Succeeded or Failed" Sep 15 10:51:05.450: INFO: Trying to get logs from node kali-worker2 pod var-expansion-5a0b0123-9356-49ef-a2ed-d43fb2db7d0d container dapi-container: STEP: delete the pod Sep 15 10:51:05.548: INFO: Waiting for pod var-expansion-5a0b0123-9356-49ef-a2ed-d43fb2db7d0d to disappear Sep 15 10:51:05.557: INFO: Pod var-expansion-5a0b0123-9356-49ef-a2ed-d43fb2db7d0d no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 10:51:05.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1074" for this suite. 
•
{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":303,"completed":79,"skipped":1434,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 15 10:51:05.564: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Sep 15 10:51:06.376: INFO: Pod name wrapped-volume-race-26ddc686-56b2-48ab-a3c4-fdb6ea0fa9b7: Found 0 pods out of 5
Sep 15 10:51:11.385: INFO: Pod name wrapped-volume-race-26ddc686-56b2-48ab-a3c4-fdb6ea0fa9b7: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-26ddc686-56b2-48ab-a3c4-fdb6ea0fa9b7 in namespace emptydir-wrapper-273, will wait for the garbage collector to delete the pods
Sep 15 10:51:25.487: INFO: Deleting ReplicationController wrapped-volume-race-26ddc686-56b2-48ab-a3c4-fdb6ea0fa9b7 took: 26.692ms
Sep 15 10:51:25.987: INFO: Terminating ReplicationController wrapped-volume-race-26ddc686-56b2-48ab-a3c4-fdb6ea0fa9b7 pods took: 500.216011ms
STEP: Creating RC which spawns configmap-volume pods
Sep 15 10:51:43.335: INFO: Pod name wrapped-volume-race-7803d772-d27a-4cd0-8f69-f71f7074917e: Found 0 pods out of 5
Sep 15 10:51:48.344: INFO: Pod name wrapped-volume-race-7803d772-d27a-4cd0-8f69-f71f7074917e: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-7803d772-d27a-4cd0-8f69-f71f7074917e in namespace emptydir-wrapper-273, will wait for the garbage collector to delete the pods
Sep 15 10:52:02.427: INFO: Deleting ReplicationController wrapped-volume-race-7803d772-d27a-4cd0-8f69-f71f7074917e took: 8.408703ms
Sep 15 10:52:02.928: INFO: Terminating ReplicationController wrapped-volume-race-7803d772-d27a-4cd0-8f69-f71f7074917e pods took: 500.356957ms
STEP: Creating RC which spawns configmap-volume pods
Sep 15 10:52:13.364: INFO: Pod name wrapped-volume-race-244d3383-f6c1-4183-8170-776492b5d495: Found 0 pods out of 5
Sep 15 10:52:18.372: INFO: Pod name wrapped-volume-race-244d3383-f6c1-4183-8170-776492b5d495: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-244d3383-f6c1-4183-8170-776492b5d495 in namespace emptydir-wrapper-273, will wait for the garbage collector to delete the pods
Sep 15 10:52:34.458: INFO: Deleting ReplicationController wrapped-volume-race-244d3383-f6c1-4183-8170-776492b5d495 took: 7.38744ms
Sep 15 10:52:34.958: INFO: Terminating ReplicationController wrapped-volume-race-244d3383-f6c1-4183-8170-776492b5d495 pods took: 500.235712ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 15 10:52:44.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-273" for this suite.
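The repeated "Found N pods out of 5" lines above come from a poll-until-ready loop: the runner re-lists the ReplicationController's pods on a fixed interval until the expected replica count appears or a timeout expires. A minimal, self-contained sketch of that wait pattern (the `wait_for_pod_count` helper is hypothetical, for illustration only; the framework's actual implementation in runners.go differs):

```python
import time

def wait_for_pod_count(list_pods, expected, timeout=300.0, interval=5.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll list_pods() until it reports `expected` pods or `timeout` elapses.

    Illustrative sketch only -- not the e2e framework's real code. `clock` and
    `sleep` are injectable so the loop can be exercised without real waiting.
    """
    start = clock()
    while True:
        found = len(list_pods())
        # Mirrors the log line: "Pod name ...: Found N pods out of M"
        print(f"Found {found} pods out of {expected}")
        if found >= expected:
            return True
        if clock() - start >= timeout:
            return False
        sleep(interval)
```

In the log above the poll interval is roughly 5 seconds and the condition is met on the second poll each time (0/5 at first, then 5/5).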
• [SLOW TEST:98.629 seconds]
[sig-storage] EmptyDir wrapper volumes
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":303,"completed":80,"skipped":1455,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 15 10:52:44.194: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-configmap-nmsb
STEP: Creating a pod to test atomic-volume-subpath
Sep 15 10:52:44.280: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-nmsb" in namespace "subpath-9485" to be "Succeeded or Failed"
Sep 15 10:52:44.302: INFO: Pod "pod-subpath-test-configmap-nmsb": Phase="Pending", Reason="", readiness=false. Elapsed: 21.853815ms
Sep 15 10:52:46.327: INFO: Pod "pod-subpath-test-configmap-nmsb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046934473s
Sep 15 10:52:48.332: INFO: Pod "pod-subpath-test-configmap-nmsb": Phase="Running", Reason="", readiness=true. Elapsed: 4.051895291s
Sep 15 10:52:50.336: INFO: Pod "pod-subpath-test-configmap-nmsb": Phase="Running", Reason="", readiness=true. Elapsed: 6.055786327s
Sep 15 10:52:52.339: INFO: Pod "pod-subpath-test-configmap-nmsb": Phase="Running", Reason="", readiness=true. Elapsed: 8.059268192s
Sep 15 10:52:54.345: INFO: Pod "pod-subpath-test-configmap-nmsb": Phase="Running", Reason="", readiness=true. Elapsed: 10.064482119s
Sep 15 10:52:56.349: INFO: Pod "pod-subpath-test-configmap-nmsb": Phase="Running", Reason="", readiness=true. Elapsed: 12.068442475s
Sep 15 10:52:58.353: INFO: Pod "pod-subpath-test-configmap-nmsb": Phase="Running", Reason="", readiness=true. Elapsed: 14.073007093s
Sep 15 10:53:00.358: INFO: Pod "pod-subpath-test-configmap-nmsb": Phase="Running", Reason="", readiness=true. Elapsed: 16.078190062s
Sep 15 10:53:02.404: INFO: Pod "pod-subpath-test-configmap-nmsb": Phase="Running", Reason="", readiness=true. Elapsed: 18.123902618s
Sep 15 10:53:04.409: INFO: Pod "pod-subpath-test-configmap-nmsb": Phase="Running", Reason="", readiness=true. Elapsed: 20.12888959s
Sep 15 10:53:06.414: INFO: Pod "pod-subpath-test-configmap-nmsb": Phase="Running", Reason="", readiness=true. Elapsed: 22.133607334s
Sep 15 10:53:08.419: INFO: Pod "pod-subpath-test-configmap-nmsb": Phase="Running", Reason="", readiness=true. Elapsed: 24.138567393s
Sep 15 10:53:10.424: INFO: Pod "pod-subpath-test-configmap-nmsb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.143620446s
STEP: Saw pod success
Sep 15 10:53:10.424: INFO: Pod "pod-subpath-test-configmap-nmsb" satisfied condition "Succeeded or Failed"
Sep 15 10:53:10.427: INFO: Trying to get logs from node kali-worker pod pod-subpath-test-configmap-nmsb container test-container-subpath-configmap-nmsb:
STEP: delete the pod
Sep 15 10:53:10.483: INFO: Waiting for pod pod-subpath-test-configmap-nmsb to disappear
Sep 15 10:53:10.495: INFO: Pod pod-subpath-test-configmap-nmsb no longer exists
STEP: Deleting pod pod-subpath-test-configmap-nmsb
Sep 15 10:53:10.495: INFO: Deleting pod "pod-subpath-test-configmap-nmsb" in namespace "subpath-9485"
[AfterEach] [sig-storage] Subpath
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 15 10:53:10.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-9485" for this suite.
• [SLOW TEST:26.331 seconds]
[sig-storage] Subpath
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":303,"completed":81,"skipped":1508,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 15 10:53:10.525: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782
[It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating service in namespace services-2099
STEP: creating service affinity-clusterip-transition in namespace services-2099
STEP: creating replication controller affinity-clusterip-transition in namespace services-2099
I0915 10:53:10.659573 7 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-2099, replica count: 3
I0915 10:53:13.709986 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0915 10:53:16.710314 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Sep 15 10:53:16.717: INFO: Creating new exec pod
Sep 15 10:53:21.761: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=services-2099 execpod-affinitybdxhc -- /bin/sh
-x -c nc -zv -t -w 2 affinity-clusterip-transition 80' Sep 15 10:53:25.824: INFO: stderr: "I0915 10:53:25.728818 1145 log.go:181] (0xc00018cd10) (0xc000da6320) Create stream\nI0915 10:53:25.728899 1145 log.go:181] (0xc00018cd10) (0xc000da6320) Stream added, broadcasting: 1\nI0915 10:53:25.734228 1145 log.go:181] (0xc00018cd10) Reply frame received for 1\nI0915 10:53:25.734292 1145 log.go:181] (0xc00018cd10) (0xc000b59360) Create stream\nI0915 10:53:25.734310 1145 log.go:181] (0xc00018cd10) (0xc000b59360) Stream added, broadcasting: 3\nI0915 10:53:25.735423 1145 log.go:181] (0xc00018cd10) Reply frame received for 3\nI0915 10:53:25.735449 1145 log.go:181] (0xc00018cd10) (0xc000da63c0) Create stream\nI0915 10:53:25.735459 1145 log.go:181] (0xc00018cd10) (0xc000da63c0) Stream added, broadcasting: 5\nI0915 10:53:25.736386 1145 log.go:181] (0xc00018cd10) Reply frame received for 5\nI0915 10:53:25.816406 1145 log.go:181] (0xc00018cd10) Data frame received for 5\nI0915 10:53:25.816440 1145 log.go:181] (0xc000da63c0) (5) Data frame handling\nI0915 10:53:25.816455 1145 log.go:181] (0xc000da63c0) (5) Data frame sent\nI0915 10:53:25.816465 1145 log.go:181] (0xc00018cd10) Data frame received for 5\nI0915 10:53:25.816478 1145 log.go:181] (0xc000da63c0) (5) Data frame handling\n+ nc -zv -t -w 2 affinity-clusterip-transition 80\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\nI0915 10:53:25.816509 1145 log.go:181] (0xc000da63c0) (5) Data frame sent\nI0915 10:53:25.816859 1145 log.go:181] (0xc00018cd10) Data frame received for 3\nI0915 10:53:25.816885 1145 log.go:181] (0xc000b59360) (3) Data frame handling\nI0915 10:53:25.817154 1145 log.go:181] (0xc00018cd10) Data frame received for 5\nI0915 10:53:25.817177 1145 log.go:181] (0xc000da63c0) (5) Data frame handling\nI0915 10:53:25.819131 1145 log.go:181] (0xc00018cd10) Data frame received for 1\nI0915 10:53:25.819152 1145 log.go:181] (0xc000da6320) (1) Data frame handling\nI0915 10:53:25.819165 1145 
log.go:181] (0xc000da6320) (1) Data frame sent\nI0915 10:53:25.819180 1145 log.go:181] (0xc00018cd10) (0xc000da6320) Stream removed, broadcasting: 1\nI0915 10:53:25.819193 1145 log.go:181] (0xc00018cd10) Go away received\nI0915 10:53:25.819616 1145 log.go:181] (0xc00018cd10) (0xc000da6320) Stream removed, broadcasting: 1\nI0915 10:53:25.819651 1145 log.go:181] (0xc00018cd10) (0xc000b59360) Stream removed, broadcasting: 3\nI0915 10:53:25.819677 1145 log.go:181] (0xc00018cd10) (0xc000da63c0) Stream removed, broadcasting: 5\n" Sep 15 10:53:25.824: INFO: stdout: "" Sep 15 10:53:25.825: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=services-2099 execpod-affinitybdxhc -- /bin/sh -x -c nc -zv -t -w 2 10.106.172.96 80' Sep 15 10:53:26.044: INFO: stderr: "I0915 10:53:25.967716 1164 log.go:181] (0xc000e073f0) (0xc0008be780) Create stream\nI0915 10:53:25.967769 1164 log.go:181] (0xc000e073f0) (0xc0008be780) Stream added, broadcasting: 1\nI0915 10:53:25.973330 1164 log.go:181] (0xc000e073f0) Reply frame received for 1\nI0915 10:53:25.973395 1164 log.go:181] (0xc000e073f0) (0xc000cfe000) Create stream\nI0915 10:53:25.973433 1164 log.go:181] (0xc000e073f0) (0xc000cfe000) Stream added, broadcasting: 3\nI0915 10:53:25.974602 1164 log.go:181] (0xc000e073f0) Reply frame received for 3\nI0915 10:53:25.974658 1164 log.go:181] (0xc000e073f0) (0xc000308500) Create stream\nI0915 10:53:25.974672 1164 log.go:181] (0xc000e073f0) (0xc000308500) Stream added, broadcasting: 5\nI0915 10:53:25.975815 1164 log.go:181] (0xc000e073f0) Reply frame received for 5\nI0915 10:53:26.037276 1164 log.go:181] (0xc000e073f0) Data frame received for 3\nI0915 10:53:26.037308 1164 log.go:181] (0xc000cfe000) (3) Data frame handling\nI0915 10:53:26.037778 1164 log.go:181] (0xc000e073f0) Data frame received for 5\nI0915 10:53:26.037814 1164 log.go:181] (0xc000308500) (5) Data frame handling\nI0915 10:53:26.037847 1164 log.go:181] 
(0xc000308500) (5) Data frame sent\nI0915 10:53:26.037873 1164 log.go:181] (0xc000e073f0) Data frame received for 5\nI0915 10:53:26.037886 1164 log.go:181] (0xc000308500) (5) Data frame handling\n+ nc -zv -t -w 2 10.106.172.96 80\nConnection to 10.106.172.96 80 port [tcp/http] succeeded!\nI0915 10:53:26.039140 1164 log.go:181] (0xc000e073f0) Data frame received for 1\nI0915 10:53:26.039163 1164 log.go:181] (0xc0008be780) (1) Data frame handling\nI0915 10:53:26.039180 1164 log.go:181] (0xc0008be780) (1) Data frame sent\nI0915 10:53:26.039196 1164 log.go:181] (0xc000e073f0) (0xc0008be780) Stream removed, broadcasting: 1\nI0915 10:53:26.039226 1164 log.go:181] (0xc000e073f0) Go away received\nI0915 10:53:26.039599 1164 log.go:181] (0xc000e073f0) (0xc0008be780) Stream removed, broadcasting: 1\nI0915 10:53:26.039620 1164 log.go:181] (0xc000e073f0) (0xc000cfe000) Stream removed, broadcasting: 3\nI0915 10:53:26.039630 1164 log.go:181] (0xc000e073f0) (0xc000308500) Stream removed, broadcasting: 5\n" Sep 15 10:53:26.044: INFO: stdout: "" Sep 15 10:53:26.083: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=services-2099 execpod-affinitybdxhc -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.106.172.96:80/ ; done' Sep 15 10:53:26.402: INFO: stderr: "I0915 10:53:26.233232 1182 log.go:181] (0xc000e1b3f0) (0xc000e26960) Create stream\nI0915 10:53:26.233333 1182 log.go:181] (0xc000e1b3f0) (0xc000e26960) Stream added, broadcasting: 1\nI0915 10:53:26.241857 1182 log.go:181] (0xc000e1b3f0) Reply frame received for 1\nI0915 10:53:26.241898 1182 log.go:181] (0xc000e1b3f0) (0xc000b84000) Create stream\nI0915 10:53:26.241908 1182 log.go:181] (0xc000e1b3f0) (0xc000b84000) Stream added, broadcasting: 3\nI0915 10:53:26.242959 1182 log.go:181] (0xc000e1b3f0) Reply frame received for 3\nI0915 10:53:26.242989 1182 log.go:181] (0xc000e1b3f0) (0xc000e26000) Create stream\nI0915 
10:53:26.242998 1182 log.go:181] (0xc000e1b3f0) (0xc000e26000) Stream added, broadcasting: 5\nI0915 10:53:26.243815 1182 log.go:181] (0xc000e1b3f0) Reply frame received for 5\nI0915 10:53:26.305253 1182 log.go:181] (0xc000e1b3f0) Data frame received for 5\nI0915 10:53:26.305298 1182 log.go:181] (0xc000e26000) (5) Data frame handling\nI0915 10:53:26.305320 1182 log.go:181] (0xc000e26000) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.172.96:80/\nI0915 10:53:26.305406 1182 log.go:181] (0xc000e1b3f0) Data frame received for 3\nI0915 10:53:26.305441 1182 log.go:181] (0xc000b84000) (3) Data frame handling\nI0915 10:53:26.305468 1182 log.go:181] (0xc000b84000) (3) Data frame sent\nI0915 10:53:26.311758 1182 log.go:181] (0xc000e1b3f0) Data frame received for 3\nI0915 10:53:26.311804 1182 log.go:181] (0xc000b84000) (3) Data frame handling\nI0915 10:53:26.311821 1182 log.go:181] (0xc000b84000) (3) Data frame sent\nI0915 10:53:26.311885 1182 log.go:181] (0xc000e1b3f0) Data frame received for 5\nI0915 10:53:26.311915 1182 log.go:181] (0xc000e26000) (5) Data frame handling\nI0915 10:53:26.311931 1182 log.go:181] (0xc000e26000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.172.96:80/\nI0915 10:53:26.316739 1182 log.go:181] (0xc000e1b3f0) Data frame received for 3\nI0915 10:53:26.316762 1182 log.go:181] (0xc000b84000) (3) Data frame handling\nI0915 10:53:26.316778 1182 log.go:181] (0xc000b84000) (3) Data frame sent\nI0915 10:53:26.317456 1182 log.go:181] (0xc000e1b3f0) Data frame received for 5\nI0915 10:53:26.317470 1182 log.go:181] (0xc000e26000) (5) Data frame handling\nI0915 10:53:26.317477 1182 log.go:181] (0xc000e26000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.172.96:80/\nI0915 10:53:26.317497 1182 log.go:181] (0xc000e1b3f0) Data frame received for 3\nI0915 10:53:26.317515 1182 log.go:181] (0xc000b84000) (3) Data frame handling\nI0915 10:53:26.317539 1182 log.go:181] 
(0xc000b84000) (3) Data frame sent\nI0915 10:53:26.321940 1182 log.go:181] (0xc000e1b3f0) Data frame received for 3\nI0915 10:53:26.321955 1182 log.go:181] (0xc000b84000) (3) Data frame handling\nI0915 10:53:26.321967 1182 log.go:181] (0xc000b84000) (3) Data frame sent\nI0915 10:53:26.322781 1182 log.go:181] (0xc000e1b3f0) Data frame received for 5\nI0915 10:53:26.322798 1182 log.go:181] (0xc000e26000) (5) Data frame handling\nI0915 10:53:26.322807 1182 log.go:181] (0xc000e26000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.172.96:80/\nI0915 10:53:26.322917 1182 log.go:181] (0xc000e1b3f0) Data frame received for 3\nI0915 10:53:26.322934 1182 log.go:181] (0xc000b84000) (3) Data frame handling\nI0915 10:53:26.322951 1182 log.go:181] (0xc000b84000) (3) Data frame sent\nI0915 10:53:26.327158 1182 log.go:181] (0xc000e1b3f0) Data frame received for 3\nI0915 10:53:26.327174 1182 log.go:181] (0xc000b84000) (3) Data frame handling\nI0915 10:53:26.327187 1182 log.go:181] (0xc000b84000) (3) Data frame sent\nI0915 10:53:26.327649 1182 log.go:181] (0xc000e1b3f0) Data frame received for 5\nI0915 10:53:26.327683 1182 log.go:181] (0xc000e26000) (5) Data frame handling\nI0915 10:53:26.327698 1182 log.go:181] (0xc000e26000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.172.96:80/\nI0915 10:53:26.327715 1182 log.go:181] (0xc000e1b3f0) Data frame received for 3\nI0915 10:53:26.327724 1182 log.go:181] (0xc000b84000) (3) Data frame handling\nI0915 10:53:26.327734 1182 log.go:181] (0xc000b84000) (3) Data frame sent\nI0915 10:53:26.332985 1182 log.go:181] (0xc000e1b3f0) Data frame received for 3\nI0915 10:53:26.333005 1182 log.go:181] (0xc000b84000) (3) Data frame handling\nI0915 10:53:26.333023 1182 log.go:181] (0xc000b84000) (3) Data frame sent\nI0915 10:53:26.333588 1182 log.go:181] (0xc000e1b3f0) Data frame received for 3\nI0915 10:53:26.333610 1182 log.go:181] (0xc000b84000) (3) Data frame handling\nI0915 10:53:26.333622 
1182 log.go:181] (0xc000b84000) (3) Data frame sent\nI0915 10:53:26.333644 1182 log.go:181] (0xc000e1b3f0) Data frame received for 5\nI0915 10:53:26.333667 1182 log.go:181] (0xc000e26000) (5) Data frame handling\nI0915 10:53:26.333682 1182 log.go:181] (0xc000e26000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.172.96:80/\nI0915 10:53:26.337922 1182 log.go:181] (0xc000e1b3f0) Data frame received for 3\nI0915 10:53:26.337941 1182 log.go:181] (0xc000b84000) (3) Data frame handling\nI0915 10:53:26.337975 1182 log.go:181] (0xc000b84000) (3) Data frame sent\nI0915 10:53:26.338502 1182 log.go:181] (0xc000e1b3f0) Data frame received for 3\nI0915 10:53:26.338519 1182 log.go:181] (0xc000b84000) (3) Data frame handling\nI0915 10:53:26.338537 1182 log.go:181] (0xc000e1b3f0) Data frame received for 5\nI0915 10:53:26.338561 1182 log.go:181] (0xc000e26000) (5) Data frame handling\nI0915 10:53:26.338577 1182 log.go:181] (0xc000e26000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.172.96:80/\nI0915 10:53:26.338597 1182 log.go:181] (0xc000b84000) (3) Data frame sent\nI0915 10:53:26.345642 1182 log.go:181] (0xc000e1b3f0) Data frame received for 3\nI0915 10:53:26.345657 1182 log.go:181] (0xc000b84000) (3) Data frame handling\nI0915 10:53:26.345668 1182 log.go:181] (0xc000b84000) (3) Data frame sent\nI0915 10:53:26.346255 1182 log.go:181] (0xc000e1b3f0) Data frame received for 3\nI0915 10:53:26.346268 1182 log.go:181] (0xc000b84000) (3) Data frame handling\nI0915 10:53:26.346276 1182 log.go:181] (0xc000b84000) (3) Data frame sent\nI0915 10:53:26.346286 1182 log.go:181] (0xc000e1b3f0) Data frame received for 5\nI0915 10:53:26.346291 1182 log.go:181] (0xc000e26000) (5) Data frame handling\nI0915 10:53:26.346298 1182 log.go:181] (0xc000e26000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.172.96:80/\nI0915 10:53:26.351775 1182 log.go:181] (0xc000e1b3f0) Data frame received for 3\nI0915 
10:53:26.351802 1182 log.go:181] (0xc000b84000) (3) Data frame handling\nI0915 10:53:26.351826 1182 log.go:181] (0xc000b84000) (3) Data frame sent\nI0915 10:53:26.352530 1182 log.go:181] (0xc000e1b3f0) Data frame received for 5\nI0915 10:53:26.352544 1182 log.go:181] (0xc000e26000) (5) Data frame handling\nI0915 10:53:26.352551 1182 log.go:181] (0xc000e26000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.172.96:80/\nI0915 10:53:26.352581 1182 log.go:181] (0xc000e1b3f0) Data frame received for 3\nI0915 10:53:26.352604 1182 log.go:181] (0xc000b84000) (3) Data frame handling\nI0915 10:53:26.352622 1182 log.go:181] (0xc000b84000) (3) Data frame sent\nI0915 10:53:26.358248 1182 log.go:181] (0xc000e1b3f0) Data frame received for 3\nI0915 10:53:26.358272 1182 log.go:181] (0xc000b84000) (3) Data frame handling\nI0915 10:53:26.358293 1182 log.go:181] (0xc000b84000) (3) Data frame sent\nI0915 10:53:26.359150 1182 log.go:181] (0xc000e1b3f0) Data frame received for 3\nI0915 10:53:26.359180 1182 log.go:181] (0xc000b84000) (3) Data frame handling\nI0915 10:53:26.359191 1182 log.go:181] (0xc000b84000) (3) Data frame sent\nI0915 10:53:26.359208 1182 log.go:181] (0xc000e1b3f0) Data frame received for 5\nI0915 10:53:26.359217 1182 log.go:181] (0xc000e26000) (5) Data frame handling\nI0915 10:53:26.359226 1182 log.go:181] (0xc000e26000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.172.96:80/\nI0915 10:53:26.364689 1182 log.go:181] (0xc000e1b3f0) Data frame received for 3\nI0915 10:53:26.364725 1182 log.go:181] (0xc000b84000) (3) Data frame handling\nI0915 10:53:26.364746 1182 log.go:181] (0xc000b84000) (3) Data frame sent\nI0915 10:53:26.365203 1182 log.go:181] (0xc000e1b3f0) Data frame received for 3\nI0915 10:53:26.365234 1182 log.go:181] (0xc000b84000) (3) Data frame handling\nI0915 10:53:26.365254 1182 log.go:181] (0xc000b84000) (3) Data frame sent\nI0915 10:53:26.365276 1182 log.go:181] (0xc000e1b3f0) Data frame 
received for 5\nI0915 10:53:26.365291 1182 log.go:181] (0xc000e26000) (5) Data frame handling\nI0915 10:53:26.365307 1182 log.go:181] (0xc000e26000) (5) Data frame sent\nI0915 10:53:26.365322 1182 log.go:181] (0xc000e1b3f0) Data frame received for 5\n+ echo\n+ curl -q -s --connect-timeoutI0915 10:53:26.365340 1182 log.go:181] (0xc000e26000) (5) Data frame handling\nI0915 10:53:26.365383 1182 log.go:181] (0xc000e26000) (5) Data frame sent\n 2 http://10.106.172.96:80/\nI0915 10:53:26.371053 1182 log.go:181] (0xc000e1b3f0) Data frame received for 3\nI0915 10:53:26.371090 1182 log.go:181] (0xc000b84000) (3) Data frame handling\nI0915 10:53:26.371113 1182 log.go:181] (0xc000b84000) (3) Data frame sent\nI0915 10:53:26.371479 1182 log.go:181] (0xc000e1b3f0) Data frame received for 5\nI0915 10:53:26.371510 1182 log.go:181] (0xc000e26000) (5) Data frame handling\nI0915 10:53:26.371533 1182 log.go:181] (0xc000e26000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.172.96:80/\nI0915 10:53:26.371560 1182 log.go:181] (0xc000e1b3f0) Data frame received for 3\nI0915 10:53:26.371572 1182 log.go:181] (0xc000b84000) (3) Data frame handling\nI0915 10:53:26.371602 1182 log.go:181] (0xc000b84000) (3) Data frame sent\nI0915 10:53:26.378345 1182 log.go:181] (0xc000e1b3f0) Data frame received for 3\nI0915 10:53:26.378363 1182 log.go:181] (0xc000b84000) (3) Data frame handling\nI0915 10:53:26.378379 1182 log.go:181] (0xc000b84000) (3) Data frame sent\nI0915 10:53:26.379344 1182 log.go:181] (0xc000e1b3f0) Data frame received for 3\nI0915 10:53:26.379370 1182 log.go:181] (0xc000e1b3f0) Data frame received for 5\nI0915 10:53:26.379392 1182 log.go:181] (0xc000e26000) (5) Data frame handling\nI0915 10:53:26.379403 1182 log.go:181] (0xc000e26000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.172.96:80/\nI0915 10:53:26.379412 1182 log.go:181] (0xc000b84000) (3) Data frame handling\nI0915 10:53:26.379432 1182 log.go:181] (0xc000b84000) (3) 
Data frame sent\nI0915 10:53:26.382729 1182 log.go:181] (0xc000e1b3f0) Data frame received for 3\nI0915 10:53:26.382752 1182 log.go:181] (0xc000b84000) (3) Data frame handling\nI0915 10:53:26.382782 1182 log.go:181] (0xc000b84000) (3) Data frame sent\nI0915 10:53:26.383342 1182 log.go:181] (0xc000e1b3f0) Data frame received for 3\nI0915 10:53:26.383354 1182 log.go:181] (0xc000b84000) (3) Data frame handling\nI0915 10:53:26.383364 1182 log.go:181] (0xc000e1b3f0) Data frame received for 5\nI0915 10:53:26.383385 1182 log.go:181] (0xc000e26000) (5) Data frame handling\nI0915 10:53:26.383396 1182 log.go:181] (0xc000e26000) (5) Data frame sent\nI0915 10:53:26.383402 1182 log.go:181] (0xc000e1b3f0) Data frame received for 5\nI0915 10:53:26.383411 1182 log.go:181] (0xc000e26000) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.172.96:80/\nI0915 10:53:26.383427 1182 log.go:181] (0xc000e26000) (5) Data frame sent\nI0915 10:53:26.383437 1182 log.go:181] (0xc000b84000) (3) Data frame sent\nI0915 10:53:26.386647 1182 log.go:181] (0xc000e1b3f0) Data frame received for 3\nI0915 10:53:26.386660 1182 log.go:181] (0xc000b84000) (3) Data frame handling\nI0915 10:53:26.386665 1182 log.go:181] (0xc000b84000) (3) Data frame sent\nI0915 10:53:26.387595 1182 log.go:181] (0xc000e1b3f0) Data frame received for 3\nI0915 10:53:26.387621 1182 log.go:181] (0xc000b84000) (3) Data frame handling\nI0915 10:53:26.387646 1182 log.go:181] (0xc000b84000) (3) Data frame sent\nI0915 10:53:26.387670 1182 log.go:181] (0xc000e1b3f0) Data frame received for 5\nI0915 10:53:26.387679 1182 log.go:181] (0xc000e26000) (5) Data frame handling\nI0915 10:53:26.387696 1182 log.go:181] (0xc000e26000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.172.96:80/\nI0915 10:53:26.390839 1182 log.go:181] (0xc000e1b3f0) Data frame received for 3\nI0915 10:53:26.390854 1182 log.go:181] (0xc000b84000) (3) Data frame handling\nI0915 10:53:26.390865 1182 log.go:181] 
(0xc000b84000) (3) Data frame sent\nI0915 10:53:26.391318 1182 log.go:181] (0xc000e1b3f0) Data frame received for 3\nI0915 10:53:26.391345 1182 log.go:181] (0xc000b84000) (3) Data frame handling\nI0915 10:53:26.391359 1182 log.go:181] (0xc000b84000) (3) Data frame sent\nI0915 10:53:26.391370 1182 log.go:181] (0xc000e1b3f0) Data frame received for 5\nI0915 10:53:26.391375 1182 log.go:181] (0xc000e26000) (5) Data frame handling\nI0915 10:53:26.391381 1182 log.go:181] (0xc000e26000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.172.96:80/\nI0915 10:53:26.396288 1182 log.go:181] (0xc000e1b3f0) Data frame received for 3\nI0915 10:53:26.396301 1182 log.go:181] (0xc000b84000) (3) Data frame handling\nI0915 10:53:26.396306 1182 log.go:181] (0xc000b84000) (3) Data frame sent\nI0915 10:53:26.397162 1182 log.go:181] (0xc000e1b3f0) Data frame received for 3\nI0915 10:53:26.397171 1182 log.go:181] (0xc000b84000) (3) Data frame handling\nI0915 10:53:26.397290 1182 log.go:181] (0xc000e1b3f0) Data frame received for 5\nI0915 10:53:26.397304 1182 log.go:181] (0xc000e26000) (5) Data frame handling\nI0915 10:53:26.398682 1182 log.go:181] (0xc000e1b3f0) Data frame received for 1\nI0915 10:53:26.398692 1182 log.go:181] (0xc000e26960) (1) Data frame handling\nI0915 10:53:26.398697 1182 log.go:181] (0xc000e26960) (1) Data frame sent\nI0915 10:53:26.398818 1182 log.go:181] (0xc000e1b3f0) (0xc000e26960) Stream removed, broadcasting: 1\nI0915 10:53:26.398905 1182 log.go:181] (0xc000e1b3f0) Go away received\nI0915 10:53:26.399084 1182 log.go:181] (0xc000e1b3f0) (0xc000e26960) Stream removed, broadcasting: 1\nI0915 10:53:26.399094 1182 log.go:181] (0xc000e1b3f0) (0xc000b84000) Stream removed, broadcasting: 3\nI0915 10:53:26.399099 1182 log.go:181] (0xc000e1b3f0) (0xc000e26000) Stream removed, broadcasting: 5\n" Sep 15 10:53:26.402: INFO: stdout: 
"\naffinity-clusterip-transition-9zph5\naffinity-clusterip-transition-wmqt7\naffinity-clusterip-transition-9zph5\naffinity-clusterip-transition-9zph5\naffinity-clusterip-transition-9zph5\naffinity-clusterip-transition-wmqt7\naffinity-clusterip-transition-wmqt7\naffinity-clusterip-transition-l5q6b\naffinity-clusterip-transition-l5q6b\naffinity-clusterip-transition-l5q6b\naffinity-clusterip-transition-9zph5\naffinity-clusterip-transition-l5q6b\naffinity-clusterip-transition-9zph5\naffinity-clusterip-transition-l5q6b\naffinity-clusterip-transition-wmqt7\naffinity-clusterip-transition-wmqt7"
Sep 15 10:53:26.402: INFO: Received response from host: affinity-clusterip-transition-9zph5
Sep 15 10:53:26.402: INFO: Received response from host: affinity-clusterip-transition-wmqt7
Sep 15 10:53:26.402: INFO: Received response from host: affinity-clusterip-transition-9zph5
Sep 15 10:53:26.402: INFO: Received response from host: affinity-clusterip-transition-9zph5
Sep 15 10:53:26.402: INFO: Received response from host: affinity-clusterip-transition-9zph5
Sep 15 10:53:26.402: INFO: Received response from host: affinity-clusterip-transition-wmqt7
Sep 15 10:53:26.402: INFO: Received response from host: affinity-clusterip-transition-wmqt7
Sep 15 10:53:26.402: INFO: Received response from host: affinity-clusterip-transition-l5q6b
Sep 15 10:53:26.402: INFO: Received response from host: affinity-clusterip-transition-l5q6b
Sep 15 10:53:26.402: INFO: Received response from host: affinity-clusterip-transition-l5q6b
Sep 15 10:53:26.402: INFO: Received response from host: affinity-clusterip-transition-9zph5
Sep 15 10:53:26.402: INFO: Received response from host: affinity-clusterip-transition-l5q6b
Sep 15 10:53:26.402: INFO: Received response from host: affinity-clusterip-transition-9zph5
Sep 15 10:53:26.402: INFO: Received response from host: affinity-clusterip-transition-l5q6b
Sep 15 10:53:26.402: INFO: Received response from host: affinity-clusterip-transition-wmqt7
Sep 15 10:53:26.402:
INFO: Received response from host: affinity-clusterip-transition-wmqt7 Sep 15 10:53:26.411: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=services-2099 execpod-affinitybdxhc -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.106.172.96:80/ ; done' Sep 15 10:53:26.713: INFO: stderr: "I0915 10:53:26.543576 1200 log.go:181] (0xc0009e51e0) (0xc0008e4aa0) Create stream\nI0915 10:53:26.543635 1200 log.go:181] (0xc0009e51e0) (0xc0008e4aa0) Stream added, broadcasting: 1\nI0915 10:53:26.547835 1200 log.go:181] (0xc0009e51e0) Reply frame received for 1\nI0915 10:53:26.547867 1200 log.go:181] (0xc0009e51e0) (0xc0008e4000) Create stream\nI0915 10:53:26.547874 1200 log.go:181] (0xc0009e51e0) (0xc0008e4000) Stream added, broadcasting: 3\nI0915 10:53:26.548699 1200 log.go:181] (0xc0009e51e0) Reply frame received for 3\nI0915 10:53:26.548730 1200 log.go:181] (0xc0009e51e0) (0xc00021bea0) Create stream\nI0915 10:53:26.548740 1200 log.go:181] (0xc0009e51e0) (0xc00021bea0) Stream added, broadcasting: 5\nI0915 10:53:26.549442 1200 log.go:181] (0xc0009e51e0) Reply frame received for 5\nI0915 10:53:26.601247 1200 log.go:181] (0xc0009e51e0) Data frame received for 5\nI0915 10:53:26.601289 1200 log.go:181] (0xc00021bea0) (5) Data frame handling\nI0915 10:53:26.601305 1200 log.go:181] (0xc00021bea0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.172.96:80/\nI0915 10:53:26.601320 1200 log.go:181] (0xc0009e51e0) Data frame received for 3\nI0915 10:53:26.601354 1200 log.go:181] (0xc0008e4000) (3) Data frame handling\nI0915 10:53:26.601366 1200 log.go:181] (0xc0008e4000) (3) Data frame sent\nI0915 10:53:26.607509 1200 log.go:181] (0xc0009e51e0) Data frame received for 3\nI0915 10:53:26.607534 1200 log.go:181] (0xc0008e4000) (3) Data frame handling\nI0915 10:53:26.607554 1200 log.go:181] (0xc0008e4000) (3) Data frame sent\nI0915 
10:53:26.608468 1200 log.go:181] (0xc0009e51e0) Data frame received for 5\nI0915 10:53:26.608493 1200 log.go:181] (0xc00021bea0) (5) Data frame handling\nI0915 10:53:26.608512 1200 log.go:181] (0xc00021bea0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.172.96:80/\nI0915 10:53:26.608618 1200 log.go:181] (0xc0009e51e0) Data frame received for 3\nI0915 10:53:26.608659 1200 log.go:181] (0xc0008e4000) (3) Data frame handling\nI0915 10:53:26.608697 1200 log.go:181] (0xc0008e4000) (3) Data frame sent\nI0915 10:53:26.614641 1200 log.go:181] (0xc0009e51e0) Data frame received for 3\nI0915 10:53:26.614668 1200 log.go:181] (0xc0008e4000) (3) Data frame handling\nI0915 10:53:26.614688 1200 log.go:181] (0xc0008e4000) (3) Data frame sent\nI0915 10:53:26.615117 1200 log.go:181] (0xc0009e51e0) Data frame received for 5\nI0915 10:53:26.615166 1200 log.go:181] (0xc00021bea0) (5) Data frame handling\nI0915 10:53:26.615193 1200 log.go:181] (0xc00021bea0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.172.96:80/\nI0915 10:53:26.615225 1200 log.go:181] (0xc0009e51e0) Data frame received for 3\nI0915 10:53:26.615245 1200 log.go:181] (0xc0008e4000) (3) Data frame handling\nI0915 10:53:26.615276 1200 log.go:181] (0xc0008e4000) (3) Data frame sent\nI0915 10:53:26.622572 1200 log.go:181] (0xc0009e51e0) Data frame received for 3\nI0915 10:53:26.622602 1200 log.go:181] (0xc0008e4000) (3) Data frame handling\nI0915 10:53:26.622641 1200 log.go:181] (0xc0008e4000) (3) Data frame sent\nI0915 10:53:26.623366 1200 log.go:181] (0xc0009e51e0) Data frame received for 5\nI0915 10:53:26.623387 1200 log.go:181] (0xc00021bea0) (5) Data frame handling\nI0915 10:53:26.623400 1200 log.go:181] (0xc00021bea0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.172.96:80/\nI0915 10:53:26.623417 1200 log.go:181] (0xc0009e51e0) Data frame received for 3\nI0915 10:53:26.623441 1200 log.go:181] (0xc0008e4000) (3) Data frame 
handling\nI0915 10:53:26.623457 1200 log.go:181] (0xc0008e4000) (3) Data frame sent\nI0915 10:53:26.627886 1200 log.go:181] (0xc0009e51e0) Data frame received for 3\nI0915 10:53:26.627898 1200 log.go:181] (0xc0008e4000) (3) Data frame handling\nI0915 10:53:26.627906 1200 log.go:181] (0xc0008e4000) (3) Data frame sent\nI0915 10:53:26.628985 1200 log.go:181] (0xc0009e51e0) Data frame received for 3\nI0915 10:53:26.628997 1200 log.go:181] (0xc0008e4000) (3) Data frame handling\nI0915 10:53:26.629002 1200 log.go:181] (0xc0008e4000) (3) Data frame sent\nI0915 10:53:26.629023 1200 log.go:181] (0xc0009e51e0) Data frame received for 5\nI0915 10:53:26.629043 1200 log.go:181] (0xc00021bea0) (5) Data frame handling\nI0915 10:53:26.629076 1200 log.go:181] (0xc00021bea0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.172.96:80/\nI0915 10:53:26.635999 1200 log.go:181] (0xc0009e51e0) Data frame received for 3\nI0915 10:53:26.636022 1200 log.go:181] (0xc0008e4000) (3) Data frame handling\nI0915 10:53:26.636041 1200 log.go:181] (0xc0008e4000) (3) Data frame sent\nI0915 10:53:26.636526 1200 log.go:181] (0xc0009e51e0) Data frame received for 3\nI0915 10:53:26.636540 1200 log.go:181] (0xc0008e4000) (3) Data frame handling\nI0915 10:53:26.636547 1200 log.go:181] (0xc0008e4000) (3) Data frame sent\nI0915 10:53:26.636639 1200 log.go:181] (0xc0009e51e0) Data frame received for 5\nI0915 10:53:26.636655 1200 log.go:181] (0xc00021bea0) (5) Data frame handling\nI0915 10:53:26.636673 1200 log.go:181] (0xc00021bea0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeoutI0915 10:53:26.636685 1200 log.go:181] (0xc0009e51e0) Data frame received for 5\nI0915 10:53:26.636693 1200 log.go:181] (0xc00021bea0) (5) Data frame handling\nI0915 10:53:26.636701 1200 log.go:181] (0xc00021bea0) (5) Data frame sent\n 2 http://10.106.172.96:80/\nI0915 10:53:26.643174 1200 log.go:181] (0xc0009e51e0) Data frame received for 3\nI0915 10:53:26.643186 1200 log.go:181] (0xc0008e4000) 
(3) Data frame handling\nI0915 10:53:26.643194 1200 log.go:181] (0xc0008e4000) (3) Data frame sent\nI0915 10:53:26.643609 1200 log.go:181] (0xc0009e51e0) Data frame received for 3\nI0915 10:53:26.643620 1200 log.go:181] (0xc0008e4000) (3) Data frame handling\nI0915 10:53:26.643626 1200 log.go:181] (0xc0008e4000) (3) Data frame sent\nI0915 10:53:26.643638 1200 log.go:181] (0xc0009e51e0) Data frame received for 5\nI0915 10:53:26.643643 1200 log.go:181] (0xc00021bea0) (5) Data frame handling\nI0915 10:53:26.643656 1200 log.go:181] (0xc00021bea0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.172.96:80/\nI0915 10:53:26.650093 1200 log.go:181] (0xc0009e51e0) Data frame received for 3\nI0915 10:53:26.650121 1200 log.go:181] (0xc0008e4000) (3) Data frame handling\nI0915 10:53:26.650158 1200 log.go:181] (0xc0008e4000) (3) Data frame sent\nI0915 10:53:26.650442 1200 log.go:181] (0xc0009e51e0) Data frame received for 3\nI0915 10:53:26.650472 1200 log.go:181] (0xc0008e4000) (3) Data frame handling\nI0915 10:53:26.650493 1200 log.go:181] (0xc0008e4000) (3) Data frame sent\nI0915 10:53:26.650519 1200 log.go:181] (0xc0009e51e0) Data frame received for 5\nI0915 10:53:26.650540 1200 log.go:181] (0xc00021bea0) (5) Data frame handling\nI0915 10:53:26.650558 1200 log.go:181] (0xc00021bea0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.172.96:80/\nI0915 10:53:26.655816 1200 log.go:181] (0xc0009e51e0) Data frame received for 3\nI0915 10:53:26.655839 1200 log.go:181] (0xc0008e4000) (3) Data frame handling\nI0915 10:53:26.655854 1200 log.go:181] (0xc0008e4000) (3) Data frame sent\nI0915 10:53:26.656807 1200 log.go:181] (0xc0009e51e0) Data frame received for 3\nI0915 10:53:26.656831 1200 log.go:181] (0xc0008e4000) (3) Data frame handling\nI0915 10:53:26.656840 1200 log.go:181] (0xc0008e4000) (3) Data frame sent\nI0915 10:53:26.656853 1200 log.go:181] (0xc0009e51e0) Data frame received for 5\nI0915 10:53:26.656861 1200 log.go:181] 
(0xc00021bea0) (5) Data frame handling\nI0915 10:53:26.656871 1200 log.go:181] (0xc00021bea0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.172.96:80/\nI0915 10:53:26.663626 1200 log.go:181] (0xc0009e51e0) Data frame received for 3\nI0915 10:53:26.663654 1200 log.go:181] (0xc0008e4000) (3) Data frame handling\nI0915 10:53:26.663679 1200 log.go:181] (0xc0008e4000) (3) Data frame sent\nI0915 10:53:26.664127 1200 log.go:181] (0xc0009e51e0) Data frame received for 5\nI0915 10:53:26.664211 1200 log.go:181] (0xc00021bea0) (5) Data frame handling\nI0915 10:53:26.664228 1200 log.go:181] (0xc00021bea0) (5) Data frame sent\nI0915 10:53:26.664236 1200 log.go:181] (0xc0009e51e0) Data frame received for 5\nI0915 10:53:26.664242 1200 log.go:181] (0xc00021bea0) (5) Data frame handling\nI0915 10:53:26.664250 1200 log.go:181] (0xc0009e51e0) Data frame received for 3\nI0915 10:53:26.664256 1200 log.go:181] (0xc0008e4000) (3) Data frame handling\nI0915 10:53:26.664263 1200 log.go:181] (0xc0008e4000) (3) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.172.96:80/\nI0915 10:53:26.664286 1200 log.go:181] (0xc00021bea0) (5) Data frame sent\nI0915 10:53:26.669873 1200 log.go:181] (0xc0009e51e0) Data frame received for 3\nI0915 10:53:26.669893 1200 log.go:181] (0xc0008e4000) (3) Data frame handling\nI0915 10:53:26.669904 1200 log.go:181] (0xc0008e4000) (3) Data frame sent\nI0915 10:53:26.670620 1200 log.go:181] (0xc0009e51e0) Data frame received for 3\nI0915 10:53:26.670644 1200 log.go:181] (0xc0008e4000) (3) Data frame handling\nI0915 10:53:26.670656 1200 log.go:181] (0xc0008e4000) (3) Data frame sent\nI0915 10:53:26.670668 1200 log.go:181] (0xc0009e51e0) Data frame received for 5\nI0915 10:53:26.670677 1200 log.go:181] (0xc00021bea0) (5) Data frame handling\nI0915 10:53:26.670691 1200 log.go:181] (0xc00021bea0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.172.96:80/\nI0915 10:53:26.674295 1200 log.go:181] 
(0xc0009e51e0) Data frame received for 3\nI0915 10:53:26.674327 1200 log.go:181] (0xc0008e4000) (3) Data frame handling\nI0915 10:53:26.674354 1200 log.go:181] (0xc0008e4000) (3) Data frame sent\nI0915 10:53:26.675126 1200 log.go:181] (0xc0009e51e0) Data frame received for 5\nI0915 10:53:26.675169 1200 log.go:181] (0xc00021bea0) (5) Data frame handling\nI0915 10:53:26.675187 1200 log.go:181] (0xc00021bea0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.172.96:80/\nI0915 10:53:26.675205 1200 log.go:181] (0xc0009e51e0) Data frame received for 3\nI0915 10:53:26.675216 1200 log.go:181] (0xc0008e4000) (3) Data frame handling\nI0915 10:53:26.675233 1200 log.go:181] (0xc0008e4000) (3) Data frame sent\nI0915 10:53:26.678862 1200 log.go:181] (0xc0009e51e0) Data frame received for 3\nI0915 10:53:26.678887 1200 log.go:181] (0xc0008e4000) (3) Data frame handling\nI0915 10:53:26.678904 1200 log.go:181] (0xc0008e4000) (3) Data frame sent\nI0915 10:53:26.679747 1200 log.go:181] (0xc0009e51e0) Data frame received for 5\nI0915 10:53:26.679795 1200 log.go:181] (0xc00021bea0) (5) Data frame handling\nI0915 10:53:26.679816 1200 log.go:181] (0xc00021bea0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.172.96:80/\nI0915 10:53:26.679837 1200 log.go:181] (0xc0009e51e0) Data frame received for 3\nI0915 10:53:26.679854 1200 log.go:181] (0xc0008e4000) (3) Data frame handling\nI0915 10:53:26.679891 1200 log.go:181] (0xc0008e4000) (3) Data frame sent\nI0915 10:53:26.686017 1200 log.go:181] (0xc0009e51e0) Data frame received for 3\nI0915 10:53:26.686047 1200 log.go:181] (0xc0008e4000) (3) Data frame handling\nI0915 10:53:26.686077 1200 log.go:181] (0xc0008e4000) (3) Data frame sent\nI0915 10:53:26.686883 1200 log.go:181] (0xc0009e51e0) Data frame received for 5\nI0915 10:53:26.686906 1200 log.go:181] (0xc00021bea0) (5) Data frame handling\nI0915 10:53:26.686925 1200 log.go:181] (0xc00021bea0) (5) Data frame sent\n+ echo\n+ curl -q -s 
--connect-timeout 2 http://10.106.172.96:80/\nI0915 10:53:26.686951 1200 log.go:181] (0xc0009e51e0) Data frame received for 3\nI0915 10:53:26.686972 1200 log.go:181] (0xc0008e4000) (3) Data frame handling\nI0915 10:53:26.686988 1200 log.go:181] (0xc0008e4000) (3) Data frame sent\nI0915 10:53:26.692874 1200 log.go:181] (0xc0009e51e0) Data frame received for 3\nI0915 10:53:26.692899 1200 log.go:181] (0xc0008e4000) (3) Data frame handling\nI0915 10:53:26.692932 1200 log.go:181] (0xc0008e4000) (3) Data frame sent\nI0915 10:53:26.693396 1200 log.go:181] (0xc0009e51e0) Data frame received for 3\nI0915 10:53:26.693412 1200 log.go:181] (0xc0008e4000) (3) Data frame handling\nI0915 10:53:26.693430 1200 log.go:181] (0xc0009e51e0) Data frame received for 5\nI0915 10:53:26.693458 1200 log.go:181] (0xc00021bea0) (5) Data frame handling\nI0915 10:53:26.693471 1200 log.go:181] (0xc00021bea0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.172.96:80/\nI0915 10:53:26.693490 1200 log.go:181] (0xc0008e4000) (3) Data frame sent\nI0915 10:53:26.698664 1200 log.go:181] (0xc0009e51e0) Data frame received for 3\nI0915 10:53:26.698687 1200 log.go:181] (0xc0008e4000) (3) Data frame handling\nI0915 10:53:26.698700 1200 log.go:181] (0xc0008e4000) (3) Data frame sent\nI0915 10:53:26.699590 1200 log.go:181] (0xc0009e51e0) Data frame received for 3\nI0915 10:53:26.699619 1200 log.go:181] (0xc0008e4000) (3) Data frame handling\nI0915 10:53:26.699632 1200 log.go:181] (0xc0008e4000) (3) Data frame sent\nI0915 10:53:26.699650 1200 log.go:181] (0xc0009e51e0) Data frame received for 5\nI0915 10:53:26.699663 1200 log.go:181] (0xc00021bea0) (5) Data frame handling\nI0915 10:53:26.699769 1200 log.go:181] (0xc00021bea0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.172.96:80/\nI0915 10:53:26.705267 1200 log.go:181] (0xc0009e51e0) Data frame received for 3\nI0915 10:53:26.705290 1200 log.go:181] (0xc0008e4000) (3) Data frame handling\nI0915 
10:53:26.705303 1200 log.go:181] (0xc0008e4000) (3) Data frame sent\nI0915 10:53:26.706323 1200 log.go:181] (0xc0009e51e0) Data frame received for 5\nI0915 10:53:26.706351 1200 log.go:181] (0xc00021bea0) (5) Data frame handling\nI0915 10:53:26.706899 1200 log.go:181] (0xc0009e51e0) Data frame received for 3\nI0915 10:53:26.706926 1200 log.go:181] (0xc0008e4000) (3) Data frame handling\nI0915 10:53:26.709015 1200 log.go:181] (0xc0009e51e0) Data frame received for 1\nI0915 10:53:26.709044 1200 log.go:181] (0xc0008e4aa0) (1) Data frame handling\nI0915 10:53:26.709129 1200 log.go:181] (0xc0008e4aa0) (1) Data frame sent\nI0915 10:53:26.709147 1200 log.go:181] (0xc0009e51e0) (0xc0008e4aa0) Stream removed, broadcasting: 1\nI0915 10:53:26.709169 1200 log.go:181] (0xc0009e51e0) Go away received\nI0915 10:53:26.709580 1200 log.go:181] (0xc0009e51e0) (0xc0008e4aa0) Stream removed, broadcasting: 1\nI0915 10:53:26.709602 1200 log.go:181] (0xc0009e51e0) (0xc0008e4000) Stream removed, broadcasting: 3\nI0915 10:53:26.709612 1200 log.go:181] (0xc0009e51e0) (0xc00021bea0) Stream removed, broadcasting: 5\n" Sep 15 10:53:26.714: INFO: stdout: "\naffinity-clusterip-transition-l5q6b\naffinity-clusterip-transition-l5q6b\naffinity-clusterip-transition-l5q6b\naffinity-clusterip-transition-l5q6b\naffinity-clusterip-transition-l5q6b\naffinity-clusterip-transition-l5q6b\naffinity-clusterip-transition-l5q6b\naffinity-clusterip-transition-l5q6b\naffinity-clusterip-transition-l5q6b\naffinity-clusterip-transition-l5q6b\naffinity-clusterip-transition-l5q6b\naffinity-clusterip-transition-l5q6b\naffinity-clusterip-transition-l5q6b\naffinity-clusterip-transition-l5q6b\naffinity-clusterip-transition-l5q6b\naffinity-clusterip-transition-l5q6b" Sep 15 10:53:26.714: INFO: Received response from host: affinity-clusterip-transition-l5q6b Sep 15 10:53:26.714: INFO: Received response from host: affinity-clusterip-transition-l5q6b Sep 15 10:53:26.714: INFO: Received response from host: 
affinity-clusterip-transition-l5q6b
Sep 15 10:53:26.714: INFO: Received response from host: affinity-clusterip-transition-l5q6b
Sep 15 10:53:26.714: INFO: Received response from host: affinity-clusterip-transition-l5q6b
Sep 15 10:53:26.714: INFO: Received response from host: affinity-clusterip-transition-l5q6b
Sep 15 10:53:26.714: INFO: Received response from host: affinity-clusterip-transition-l5q6b
Sep 15 10:53:26.714: INFO: Received response from host: affinity-clusterip-transition-l5q6b
Sep 15 10:53:26.714: INFO: Received response from host: affinity-clusterip-transition-l5q6b
Sep 15 10:53:26.714: INFO: Received response from host: affinity-clusterip-transition-l5q6b
Sep 15 10:53:26.714: INFO: Received response from host: affinity-clusterip-transition-l5q6b
Sep 15 10:53:26.714: INFO: Received response from host: affinity-clusterip-transition-l5q6b
Sep 15 10:53:26.714: INFO: Received response from host: affinity-clusterip-transition-l5q6b
Sep 15 10:53:26.714: INFO: Received response from host: affinity-clusterip-transition-l5q6b
Sep 15 10:53:26.714: INFO: Received response from host: affinity-clusterip-transition-l5q6b
Sep 15 10:53:26.714: INFO: Received response from host: affinity-clusterip-transition-l5q6b
Sep 15 10:53:26.714: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-2099, will wait for the garbage collector to delete the pods
Sep 15 10:53:27.155: INFO: Deleting ReplicationController affinity-clusterip-transition took: 345.3353ms
Sep 15 10:53:27.656: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 500.297122ms
[AfterEach] [sig-network] Services
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 15 10:53:43.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2099" for this suite.
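The passing run above hinges on one invariant: with ClientIP session affinity enabled, all 16 curl responses name the same backend pod, whereas with affinity off they spread across several pods. A minimal sketch of that check (not the suite's actual Go code; the hostname lists are patterned on the log):

```python
# Hypothetical re-implementation of the affinity invariant the e2e test verifies.
# `hosts` is the list of backend pod hostnames returned by the 16 curl requests.

def affinity_holds(hosts):
    """True when every request landed on a single backend pod."""
    return len(set(hosts)) == 1

# Before the switch, traffic spreads across the three pods (as in the log).
before = [
    "affinity-clusterip-transition-9zph5",
    "affinity-clusterip-transition-wmqt7",
    "affinity-clusterip-transition-l5q6b",
] * 5 + ["affinity-clusterip-transition-9zph5"]

# After enabling ClientIP affinity, one pod answers all 16 requests.
after = ["affinity-clusterip-transition-l5q6b"] * 16

assert not affinity_holds(before)
assert affinity_holds(after)
```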
[AfterEach] [sig-network] Services
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786
• [SLOW TEST:32.810 seconds]
[sig-network] Services
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":303,"completed":82,"skipped":1523,"failed":0}
SSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 15 10:53:43.335: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-3861 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating stateful set ss in namespace statefulset-3861 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-3861 Sep 15 10:53:43.433: INFO: Found 0 stateful pods, waiting for 1 Sep 15 10:53:53.438: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Sep 15 10:53:53.442: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3861 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Sep 15 10:53:53.737: INFO: stderr: "I0915 10:53:53.584888 1219 log.go:181] (0xc0005da000) (0xc000d3a000) Create stream\nI0915 10:53:53.584956 1219 log.go:181] (0xc0005da000) (0xc000d3a000) Stream added, broadcasting: 1\nI0915 10:53:53.586718 1219 log.go:181] (0xc0005da000) Reply frame received for 1\nI0915 10:53:53.586754 1219 log.go:181] (0xc0005da000) (0xc000a0d4a0) Create stream\nI0915 10:53:53.586763 1219 log.go:181] (0xc0005da000) (0xc000a0d4a0) Stream added, broadcasting: 3\nI0915 10:53:53.587723 1219 log.go:181] (0xc0005da000) Reply frame received for 3\nI0915 10:53:53.587773 1219 log.go:181] (0xc0005da000) (0xc000b12000) Create stream\nI0915 10:53:53.587791 1219 log.go:181] (0xc0005da000) (0xc000b12000) Stream added, broadcasting: 5\nI0915 10:53:53.588737 1219 log.go:181] (0xc0005da000) Reply frame received for 5\nI0915 10:53:53.673640 1219 log.go:181] (0xc0005da000) Data 
frame received for 5\nI0915 10:53:53.673667 1219 log.go:181] (0xc000b12000) (5) Data frame handling\nI0915 10:53:53.673683 1219 log.go:181] (0xc000b12000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0915 10:53:53.730936 1219 log.go:181] (0xc0005da000) Data frame received for 3\nI0915 10:53:53.730990 1219 log.go:181] (0xc000a0d4a0) (3) Data frame handling\nI0915 10:53:53.731008 1219 log.go:181] (0xc000a0d4a0) (3) Data frame sent\nI0915 10:53:53.731023 1219 log.go:181] (0xc0005da000) Data frame received for 3\nI0915 10:53:53.731038 1219 log.go:181] (0xc000a0d4a0) (3) Data frame handling\nI0915 10:53:53.731056 1219 log.go:181] (0xc0005da000) Data frame received for 5\nI0915 10:53:53.731069 1219 log.go:181] (0xc000b12000) (5) Data frame handling\nI0915 10:53:53.733106 1219 log.go:181] (0xc0005da000) Data frame received for 1\nI0915 10:53:53.733133 1219 log.go:181] (0xc000d3a000) (1) Data frame handling\nI0915 10:53:53.733151 1219 log.go:181] (0xc000d3a000) (1) Data frame sent\nI0915 10:53:53.733181 1219 log.go:181] (0xc0005da000) (0xc000d3a000) Stream removed, broadcasting: 1\nI0915 10:53:53.733196 1219 log.go:181] (0xc0005da000) Go away received\nI0915 10:53:53.733611 1219 log.go:181] (0xc0005da000) (0xc000d3a000) Stream removed, broadcasting: 1\nI0915 10:53:53.733630 1219 log.go:181] (0xc0005da000) (0xc000a0d4a0) Stream removed, broadcasting: 3\nI0915 10:53:53.733638 1219 log.go:181] (0xc0005da000) (0xc000b12000) Stream removed, broadcasting: 5\n" Sep 15 10:53:53.737: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Sep 15 10:53:53.737: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Sep 15 10:53:53.740: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Sep 15 10:54:03.745: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Sep 15 
10:54:03.745: INFO: Waiting for statefulset status.replicas updated to 0
Sep 15 10:54:03.776: INFO: POD NODE PHASE GRACE CONDITIONS
Sep 15 10:54:03.776: INFO: ss-0 kali-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-15 10:53:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-15 10:53:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-15 10:53:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-15 10:53:43 +0000 UTC }]
Sep 15 10:54:03.776: INFO:
Sep 15 10:54:03.776: INFO: StatefulSet ss has not reached scale 3, at 1
Sep 15 10:54:04.848: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.979026804s
Sep 15 10:54:05.853: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.906622229s
Sep 15 10:54:06.857: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.901600358s
Sep 15 10:54:07.864: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.897664039s
Sep 15 10:54:08.870: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.89080574s
Sep 15 10:54:09.875: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.884866061s
Sep 15 10:54:10.881: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.879732747s
Sep 15 10:54:11.887: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.873526126s
Sep 15 10:54:12.894: INFO: Verifying statefulset ss doesn't scale past 3 for another 868.010004ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-3861
Sep 15 10:54:13.899: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3861 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Sep 15 10:54:14.122:
INFO: stderr: "I0915 10:54:14.021093 1238 log.go:181] (0xc000c54210) (0xc000812e60) Create stream\nI0915 10:54:14.021133 1238 log.go:181] (0xc000c54210) (0xc000812e60) Stream added, broadcasting: 1\nI0915 10:54:14.023221 1238 log.go:181] (0xc000c54210) Reply frame received for 1\nI0915 10:54:14.023276 1238 log.go:181] (0xc000c54210) (0xc000936280) Create stream\nI0915 10:54:14.023289 1238 log.go:181] (0xc000c54210) (0xc000936280) Stream added, broadcasting: 3\nI0915 10:54:14.024274 1238 log.go:181] (0xc000c54210) Reply frame received for 3\nI0915 10:54:14.024320 1238 log.go:181] (0xc000c54210) (0xc0002a8500) Create stream\nI0915 10:54:14.024337 1238 log.go:181] (0xc000c54210) (0xc0002a8500) Stream added, broadcasting: 5\nI0915 10:54:14.025276 1238 log.go:181] (0xc000c54210) Reply frame received for 5\nI0915 10:54:14.115928 1238 log.go:181] (0xc000c54210) Data frame received for 3\nI0915 10:54:14.115964 1238 log.go:181] (0xc000936280) (3) Data frame handling\nI0915 10:54:14.115975 1238 log.go:181] (0xc000936280) (3) Data frame sent\nI0915 10:54:14.116039 1238 log.go:181] (0xc000c54210) Data frame received for 5\nI0915 10:54:14.116090 1238 log.go:181] (0xc0002a8500) (5) Data frame handling\nI0915 10:54:14.116124 1238 log.go:181] (0xc0002a8500) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0915 10:54:14.116259 1238 log.go:181] (0xc000c54210) Data frame received for 5\nI0915 10:54:14.116306 1238 log.go:181] (0xc0002a8500) (5) Data frame handling\nI0915 10:54:14.116341 1238 log.go:181] (0xc000c54210) Data frame received for 3\nI0915 10:54:14.116367 1238 log.go:181] (0xc000936280) (3) Data frame handling\nI0915 10:54:14.117776 1238 log.go:181] (0xc000c54210) Data frame received for 1\nI0915 10:54:14.117796 1238 log.go:181] (0xc000812e60) (1) Data frame handling\nI0915 10:54:14.117819 1238 log.go:181] (0xc000812e60) (1) Data frame sent\nI0915 10:54:14.117905 1238 log.go:181] (0xc000c54210) (0xc000812e60) Stream removed, broadcasting: 1\nI0915 
10:54:14.117957 1238 log.go:181] (0xc000c54210) Go away received\nI0915 10:54:14.118350 1238 log.go:181] (0xc000c54210) (0xc000812e60) Stream removed, broadcasting: 1\nI0915 10:54:14.118375 1238 log.go:181] (0xc000c54210) (0xc000936280) Stream removed, broadcasting: 3\nI0915 10:54:14.118390 1238 log.go:181] (0xc000c54210) (0xc0002a8500) Stream removed, broadcasting: 5\n" Sep 15 10:54:14.123: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Sep 15 10:54:14.123: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Sep 15 10:54:14.123: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3861 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 15 10:54:14.334: INFO: stderr: "I0915 10:54:14.251777 1255 log.go:181] (0xc000abc000) (0xc000a18140) Create stream\nI0915 10:54:14.251825 1255 log.go:181] (0xc000abc000) (0xc000a18140) Stream added, broadcasting: 1\nI0915 10:54:14.253963 1255 log.go:181] (0xc000abc000) Reply frame received for 1\nI0915 10:54:14.253985 1255 log.go:181] (0xc000abc000) (0xc000a181e0) Create stream\nI0915 10:54:14.253992 1255 log.go:181] (0xc000abc000) (0xc000a181e0) Stream added, broadcasting: 3\nI0915 10:54:14.255041 1255 log.go:181] (0xc000abc000) Reply frame received for 3\nI0915 10:54:14.255093 1255 log.go:181] (0xc000abc000) (0xc000d5e0a0) Create stream\nI0915 10:54:14.255111 1255 log.go:181] (0xc000abc000) (0xc000d5e0a0) Stream added, broadcasting: 5\nI0915 10:54:14.256305 1255 log.go:181] (0xc000abc000) Reply frame received for 5\nI0915 10:54:14.327546 1255 log.go:181] (0xc000abc000) Data frame received for 5\nI0915 10:54:14.327583 1255 log.go:181] (0xc000d5e0a0) (5) Data frame handling\nI0915 10:54:14.327598 1255 log.go:181] (0xc000d5e0a0) (5) Data frame sent\n+ mv -v /tmp/index.html 
/usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0915 10:54:14.327619 1255 log.go:181] (0xc000abc000) Data frame received for 5\nI0915 10:54:14.327633 1255 log.go:181] (0xc000d5e0a0) (5) Data frame handling\nI0915 10:54:14.327655 1255 log.go:181] (0xc000abc000) Data frame received for 3\nI0915 10:54:14.327665 1255 log.go:181] (0xc000a181e0) (3) Data frame handling\nI0915 10:54:14.327677 1255 log.go:181] (0xc000a181e0) (3) Data frame sent\nI0915 10:54:14.327690 1255 log.go:181] (0xc000abc000) Data frame received for 3\nI0915 10:54:14.327707 1255 log.go:181] (0xc000a181e0) (3) Data frame handling\nI0915 10:54:14.329434 1255 log.go:181] (0xc000abc000) Data frame received for 1\nI0915 10:54:14.329467 1255 log.go:181] (0xc000a18140) (1) Data frame handling\nI0915 10:54:14.329486 1255 log.go:181] (0xc000a18140) (1) Data frame sent\nI0915 10:54:14.329507 1255 log.go:181] (0xc000abc000) (0xc000a18140) Stream removed, broadcasting: 1\nI0915 10:54:14.329580 1255 log.go:181] (0xc000abc000) Go away received\nI0915 10:54:14.329925 1255 log.go:181] (0xc000abc000) (0xc000a18140) Stream removed, broadcasting: 1\nI0915 10:54:14.329941 1255 log.go:181] (0xc000abc000) (0xc000a181e0) Stream removed, broadcasting: 3\nI0915 10:54:14.329950 1255 log.go:181] (0xc000abc000) (0xc000d5e0a0) Stream removed, broadcasting: 5\n" Sep 15 10:54:14.335: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Sep 15 10:54:14.335: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Sep 15 10:54:14.335: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3861 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 15 10:54:14.545: INFO: stderr: "I0915 10:54:14.468438 1273 log.go:181] (0xc0007b73f0) (0xc0007aea00) Create stream\nI0915 
10:54:14.468495 1273 log.go:181] (0xc0007b73f0) (0xc0007aea00) Stream added, broadcasting: 1\nI0915 10:54:14.473360 1273 log.go:181] (0xc0007b73f0) Reply frame received for 1\nI0915 10:54:14.473394 1273 log.go:181] (0xc0007b73f0) (0xc0007ae000) Create stream\nI0915 10:54:14.473403 1273 log.go:181] (0xc0007b73f0) (0xc0007ae000) Stream added, broadcasting: 3\nI0915 10:54:14.474560 1273 log.go:181] (0xc0007b73f0) Reply frame received for 3\nI0915 10:54:14.474615 1273 log.go:181] (0xc0007b73f0) (0xc000824280) Create stream\nI0915 10:54:14.474636 1273 log.go:181] (0xc0007b73f0) (0xc000824280) Stream added, broadcasting: 5\nI0915 10:54:14.475805 1273 log.go:181] (0xc0007b73f0) Reply frame received for 5\nI0915 10:54:14.537892 1273 log.go:181] (0xc0007b73f0) Data frame received for 5\nI0915 10:54:14.537919 1273 log.go:181] (0xc000824280) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0915 10:54:14.537937 1273 log.go:181] (0xc0007b73f0) Data frame received for 3\nI0915 10:54:14.537960 1273 log.go:181] (0xc0007ae000) (3) Data frame handling\nI0915 10:54:14.537969 1273 log.go:181] (0xc0007ae000) (3) Data frame sent\nI0915 10:54:14.537977 1273 log.go:181] (0xc0007b73f0) Data frame received for 3\nI0915 10:54:14.537983 1273 log.go:181] (0xc0007ae000) (3) Data frame handling\nI0915 10:54:14.538008 1273 log.go:181] (0xc000824280) (5) Data frame sent\nI0915 10:54:14.538015 1273 log.go:181] (0xc0007b73f0) Data frame received for 5\nI0915 10:54:14.538021 1273 log.go:181] (0xc000824280) (5) Data frame handling\nI0915 10:54:14.539827 1273 log.go:181] (0xc0007b73f0) Data frame received for 1\nI0915 10:54:14.539853 1273 log.go:181] (0xc0007aea00) (1) Data frame handling\nI0915 10:54:14.539870 1273 log.go:181] (0xc0007aea00) (1) Data frame sent\nI0915 10:54:14.539883 1273 log.go:181] (0xc0007b73f0) (0xc0007aea00) Stream removed, broadcasting: 1\nI0915 10:54:14.539924 1273 log.go:181] 
(0xc0007b73f0) Go away received\nI0915 10:54:14.540377 1273 log.go:181] (0xc0007b73f0) (0xc0007aea00) Stream removed, broadcasting: 1\nI0915 10:54:14.540395 1273 log.go:181] (0xc0007b73f0) (0xc0007ae000) Stream removed, broadcasting: 3\nI0915 10:54:14.540405 1273 log.go:181] (0xc0007b73f0) (0xc000824280) Stream removed, broadcasting: 5\n" Sep 15 10:54:14.545: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Sep 15 10:54:14.545: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Sep 15 10:54:14.549: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Sep 15 10:54:14.549: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Sep 15 10:54:14.549: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Sep 15 10:54:14.553: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3861 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Sep 15 10:54:14.766: INFO: stderr: "I0915 10:54:14.687440 1291 log.go:181] (0xc0005fc000) (0xc0005f4000) Create stream\nI0915 10:54:14.687516 1291 log.go:181] (0xc0005fc000) (0xc0005f4000) Stream added, broadcasting: 1\nI0915 10:54:14.689446 1291 log.go:181] (0xc0005fc000) Reply frame received for 1\nI0915 10:54:14.689499 1291 log.go:181] (0xc0005fc000) (0xc0005f40a0) Create stream\nI0915 10:54:14.689512 1291 log.go:181] (0xc0005fc000) (0xc0005f40a0) Stream added, broadcasting: 3\nI0915 10:54:14.690491 1291 log.go:181] (0xc0005fc000) Reply frame received for 3\nI0915 10:54:14.690516 1291 log.go:181] (0xc0005fc000) (0xc000457e00) Create stream\nI0915 10:54:14.690523 1291 log.go:181] (0xc0005fc000) (0xc000457e00) Stream added, broadcasting: 5\nI0915 
10:54:14.691401 1291 log.go:181] (0xc0005fc000) Reply frame received for 5\nI0915 10:54:14.759595 1291 log.go:181] (0xc0005fc000) Data frame received for 3\nI0915 10:54:14.759642 1291 log.go:181] (0xc0005f40a0) (3) Data frame handling\nI0915 10:54:14.759660 1291 log.go:181] (0xc0005f40a0) (3) Data frame sent\nI0915 10:54:14.759672 1291 log.go:181] (0xc0005fc000) Data frame received for 3\nI0915 10:54:14.759683 1291 log.go:181] (0xc0005f40a0) (3) Data frame handling\nI0915 10:54:14.759724 1291 log.go:181] (0xc0005fc000) Data frame received for 5\nI0915 10:54:14.759749 1291 log.go:181] (0xc000457e00) (5) Data frame handling\nI0915 10:54:14.759779 1291 log.go:181] (0xc000457e00) (5) Data frame sent\nI0915 10:54:14.759796 1291 log.go:181] (0xc0005fc000) Data frame received for 5\nI0915 10:54:14.759809 1291 log.go:181] (0xc000457e00) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0915 10:54:14.761610 1291 log.go:181] (0xc0005fc000) Data frame received for 1\nI0915 10:54:14.761638 1291 log.go:181] (0xc0005f4000) (1) Data frame handling\nI0915 10:54:14.761654 1291 log.go:181] (0xc0005f4000) (1) Data frame sent\nI0915 10:54:14.761682 1291 log.go:181] (0xc0005fc000) (0xc0005f4000) Stream removed, broadcasting: 1\nI0915 10:54:14.761705 1291 log.go:181] (0xc0005fc000) Go away received\nI0915 10:54:14.761972 1291 log.go:181] (0xc0005fc000) (0xc0005f4000) Stream removed, broadcasting: 1\nI0915 10:54:14.761988 1291 log.go:181] (0xc0005fc000) (0xc0005f40a0) Stream removed, broadcasting: 3\nI0915 10:54:14.761997 1291 log.go:181] (0xc0005fc000) (0xc000457e00) Stream removed, broadcasting: 5\n" Sep 15 10:54:14.766: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Sep 15 10:54:14.766: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Sep 15 10:54:14.767: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-3861 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Sep 15 10:54:15.018: INFO: stderr: "I0915 10:54:14.917666 1309 log.go:181] (0xc0005c53f0) (0xc0005368c0) Create stream\nI0915 10:54:14.917725 1309 log.go:181] (0xc0005c53f0) (0xc0005368c0) Stream added, broadcasting: 1\nI0915 10:54:14.923412 1309 log.go:181] (0xc0005c53f0) Reply frame received for 1\nI0915 10:54:14.923457 1309 log.go:181] (0xc0005c53f0) (0xc000536000) Create stream\nI0915 10:54:14.923486 1309 log.go:181] (0xc0005c53f0) (0xc000536000) Stream added, broadcasting: 3\nI0915 10:54:14.924706 1309 log.go:181] (0xc0005c53f0) Reply frame received for 3\nI0915 10:54:14.924754 1309 log.go:181] (0xc0005c53f0) (0xc0005360a0) Create stream\nI0915 10:54:14.924767 1309 log.go:181] (0xc0005c53f0) (0xc0005360a0) Stream added, broadcasting: 5\nI0915 10:54:14.925811 1309 log.go:181] (0xc0005c53f0) Reply frame received for 5\nI0915 10:54:14.981947 1309 log.go:181] (0xc0005c53f0) Data frame received for 5\nI0915 10:54:14.981979 1309 log.go:181] (0xc0005360a0) (5) Data frame handling\nI0915 10:54:14.982015 1309 log.go:181] (0xc0005360a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0915 10:54:15.010135 1309 log.go:181] (0xc0005c53f0) Data frame received for 3\nI0915 10:54:15.010185 1309 log.go:181] (0xc000536000) (3) Data frame handling\nI0915 10:54:15.010218 1309 log.go:181] (0xc000536000) (3) Data frame sent\nI0915 10:54:15.010373 1309 log.go:181] (0xc0005c53f0) Data frame received for 3\nI0915 10:54:15.010399 1309 log.go:181] (0xc000536000) (3) Data frame handling\nI0915 10:54:15.010872 1309 log.go:181] (0xc0005c53f0) Data frame received for 5\nI0915 10:54:15.010886 1309 log.go:181] (0xc0005360a0) (5) Data frame handling\nI0915 10:54:15.013157 1309 log.go:181] (0xc0005c53f0) Data frame received for 1\nI0915 10:54:15.013171 1309 log.go:181] (0xc0005368c0) (1) Data frame handling\nI0915 
10:54:15.013179 1309 log.go:181] (0xc0005368c0) (1) Data frame sent\nI0915 10:54:15.013191 1309 log.go:181] (0xc0005c53f0) (0xc0005368c0) Stream removed, broadcasting: 1\nI0915 10:54:15.013205 1309 log.go:181] (0xc0005c53f0) Go away received\nI0915 10:54:15.013572 1309 log.go:181] (0xc0005c53f0) (0xc0005368c0) Stream removed, broadcasting: 1\nI0915 10:54:15.013594 1309 log.go:181] (0xc0005c53f0) (0xc000536000) Stream removed, broadcasting: 3\nI0915 10:54:15.013603 1309 log.go:181] (0xc0005c53f0) (0xc0005360a0) Stream removed, broadcasting: 5\n" Sep 15 10:54:15.018: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Sep 15 10:54:15.018: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Sep 15 10:54:15.018: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3861 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Sep 15 10:54:15.262: INFO: stderr: "I0915 10:54:15.148029 1327 log.go:181] (0xc00003a420) (0xc000309360) Create stream\nI0915 10:54:15.148115 1327 log.go:181] (0xc00003a420) (0xc000309360) Stream added, broadcasting: 1\nI0915 10:54:15.149884 1327 log.go:181] (0xc00003a420) Reply frame received for 1\nI0915 10:54:15.149929 1327 log.go:181] (0xc00003a420) (0xc0003095e0) Create stream\nI0915 10:54:15.149952 1327 log.go:181] (0xc00003a420) (0xc0003095e0) Stream added, broadcasting: 3\nI0915 10:54:15.150934 1327 log.go:181] (0xc00003a420) Reply frame received for 3\nI0915 10:54:15.150971 1327 log.go:181] (0xc00003a420) (0xc000309ea0) Create stream\nI0915 10:54:15.150980 1327 log.go:181] (0xc00003a420) (0xc000309ea0) Stream added, broadcasting: 5\nI0915 10:54:15.151667 1327 log.go:181] (0xc00003a420) Reply frame received for 5\nI0915 10:54:15.214022 1327 log.go:181] (0xc00003a420) Data frame received for 5\nI0915 10:54:15.214059 1327 
log.go:181] (0xc000309ea0) (5) Data frame handling\nI0915 10:54:15.214090 1327 log.go:181] (0xc000309ea0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0915 10:54:15.254815 1327 log.go:181] (0xc00003a420) Data frame received for 3\nI0915 10:54:15.254835 1327 log.go:181] (0xc0003095e0) (3) Data frame handling\nI0915 10:54:15.254843 1327 log.go:181] (0xc0003095e0) (3) Data frame sent\nI0915 10:54:15.254847 1327 log.go:181] (0xc00003a420) Data frame received for 3\nI0915 10:54:15.254852 1327 log.go:181] (0xc0003095e0) (3) Data frame handling\nI0915 10:54:15.255371 1327 log.go:181] (0xc00003a420) Data frame received for 5\nI0915 10:54:15.255411 1327 log.go:181] (0xc000309ea0) (5) Data frame handling\nI0915 10:54:15.257327 1327 log.go:181] (0xc00003a420) Data frame received for 1\nI0915 10:54:15.257358 1327 log.go:181] (0xc000309360) (1) Data frame handling\nI0915 10:54:15.257378 1327 log.go:181] (0xc000309360) (1) Data frame sent\nI0915 10:54:15.257401 1327 log.go:181] (0xc00003a420) (0xc000309360) Stream removed, broadcasting: 1\nI0915 10:54:15.257497 1327 log.go:181] (0xc00003a420) Go away received\nI0915 10:54:15.258004 1327 log.go:181] (0xc00003a420) (0xc000309360) Stream removed, broadcasting: 1\nI0915 10:54:15.258029 1327 log.go:181] (0xc00003a420) (0xc0003095e0) Stream removed, broadcasting: 3\nI0915 10:54:15.258042 1327 log.go:181] (0xc00003a420) (0xc000309ea0) Stream removed, broadcasting: 5\n" Sep 15 10:54:15.263: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Sep 15 10:54:15.263: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Sep 15 10:54:15.263: INFO: Waiting for statefulset status.replicas updated to 0 Sep 15 10:54:15.266: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Sep 15 10:54:25.276: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - 
Ready=false Sep 15 10:54:25.277: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Sep 15 10:54:25.277: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Sep 15 10:54:25.295: INFO: POD NODE PHASE GRACE CONDITIONS Sep 15 10:54:25.295: INFO: ss-0 kali-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-15 10:53:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-15 10:54:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-15 10:54:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-15 10:53:43 +0000 UTC }] Sep 15 10:54:25.295: INFO: ss-1 kali-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-15 10:54:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-15 10:54:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-15 10:54:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-15 10:54:03 +0000 UTC }] Sep 15 10:54:25.295: INFO: ss-2 kali-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-15 10:54:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-15 10:54:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-15 10:54:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-15 10:54:03 +0000 UTC }] Sep 15 10:54:25.295: INFO: Sep 15 10:54:25.295: INFO: StatefulSet ss has not reached scale 0, at 3 Sep 15 10:54:26.401: INFO: POD NODE PHASE GRACE CONDITIONS Sep 15 10:54:26.401: INFO: ss-0 kali-worker 
Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-15 10:53:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-15 10:54:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-15 10:54:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-15 10:53:43 +0000 UTC }] Sep 15 10:54:26.401: INFO: ss-1 kali-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-15 10:54:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-15 10:54:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-15 10:54:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-15 10:54:03 +0000 UTC }] Sep 15 10:54:26.401: INFO: ss-2 kali-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-15 10:54:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-15 10:54:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-15 10:54:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-15 10:54:03 +0000 UTC }] Sep 15 10:54:26.401: INFO: Sep 15 10:54:26.401: INFO: StatefulSet ss has not reached scale 0, at 3 Sep 15 10:54:27.496: INFO: POD NODE PHASE GRACE CONDITIONS Sep 15 10:54:27.496: INFO: ss-0 kali-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-15 10:53:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-15 10:54:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-15 10:54:15 +0000 UTC 
ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-15 10:53:43 +0000 UTC }] Sep 15 10:54:27.496: INFO: ss-1 kali-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-15 10:54:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-15 10:54:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-15 10:54:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-15 10:54:03 +0000 UTC }] Sep 15 10:54:27.496: INFO: ss-2 kali-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-15 10:54:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-15 10:54:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-15 10:54:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-15 10:54:03 +0000 UTC }] Sep 15 10:54:27.496: INFO: Sep 15 10:54:27.496: INFO: StatefulSet ss has not reached scale 0, at 3 Sep 15 10:54:28.501: INFO: POD NODE PHASE GRACE CONDITIONS Sep 15 10:54:28.501: INFO: ss-1 kali-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-15 10:54:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-15 10:54:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-15 10:54:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-15 10:54:03 +0000 UTC }] Sep 15 10:54:28.501: INFO: ss-2 kali-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-15 10:54:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 
2020-09-15 10:54:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-15 10:54:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-15 10:54:03 +0000 UTC }] Sep 15 10:54:28.501: INFO: Sep 15 10:54:28.501: INFO: StatefulSet ss has not reached scale 0, at 2 Sep 15 10:54:29.505: INFO: POD NODE PHASE GRACE CONDITIONS Sep 15 10:54:29.505: INFO: ss-1 kali-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-15 10:54:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-15 10:54:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-15 10:54:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-15 10:54:03 +0000 UTC }] Sep 15 10:54:29.505: INFO: ss-2 kali-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-15 10:54:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-15 10:54:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-15 10:54:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-15 10:54:03 +0000 UTC }] Sep 15 10:54:29.505: INFO: Sep 15 10:54:29.505: INFO: StatefulSet ss has not reached scale 0, at 2 Sep 15 10:54:30.519: INFO: POD NODE PHASE GRACE CONDITIONS Sep 15 10:54:30.519: INFO: ss-1 kali-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-15 10:54:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-15 10:54:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-15 10:54:15 
+0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-15 10:54:03 +0000 UTC }] Sep 15 10:54:30.519: INFO: ss-2 kali-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-15 10:54:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-15 10:54:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-15 10:54:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-15 10:54:03 +0000 UTC }] Sep 15 10:54:30.520: INFO: Sep 15 10:54:30.520: INFO: StatefulSet ss has not reached scale 0, at 2 Sep 15 10:54:31.525: INFO: POD NODE PHASE GRACE CONDITIONS Sep 15 10:54:31.525: INFO: ss-1 kali-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-15 10:54:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-15 10:54:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-15 10:54:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-15 10:54:03 +0000 UTC }] Sep 15 10:54:31.525: INFO: ss-2 kali-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-15 10:54:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-15 10:54:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-15 10:54:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-15 10:54:03 +0000 UTC }] Sep 15 10:54:31.525: INFO: Sep 15 10:54:31.525: INFO: StatefulSet ss has not reached scale 0, at 2 Sep 15 10:54:32.555: INFO: POD NODE PHASE GRACE CONDITIONS Sep 15 
10:54:32.555: INFO: ss-1 kali-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-15 10:54:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-15 10:54:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-15 10:54:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-15 10:54:03 +0000 UTC }] Sep 15 10:54:32.555: INFO: ss-2 kali-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-15 10:54:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-15 10:54:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-15 10:54:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-15 10:54:03 +0000 UTC }] Sep 15 10:54:32.555: INFO: Sep 15 10:54:32.555: INFO: StatefulSet ss has not reached scale 0, at 2 Sep 15 10:54:33.873: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.726849455s Sep 15 10:54:34.877: INFO: Verifying statefulset ss doesn't scale past 0 for another 409.361594ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-3861 Sep 15 10:54:35.882: INFO: Scaling statefulset ss to 0 Sep 15 10:54:35.894: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Sep 15 10:54:35.897: INFO: Deleting all statefulset in ns statefulset-3861 Sep 15 10:54:35.899: INFO: Scaling statefulset ss to 0 Sep 15 10:54:35.908: INFO: Waiting for statefulset status.replicas updated to 0 Sep 15 10:54:35.910: 
INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 10:54:35.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3861" for this suite. • [SLOW TEST:52.628 seconds] [sig-apps] StatefulSet /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":303,"completed":83,"skipped":1526,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 10:54:35.964: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default 
service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Sep 15 10:54:44.081: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Sep 15 10:54:44.085: INFO: Pod pod-with-prestop-exec-hook still exists Sep 15 10:54:46.086: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Sep 15 10:54:46.090: INFO: Pod pod-with-prestop-exec-hook still exists Sep 15 10:54:48.086: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Sep 15 10:54:48.091: INFO: Pod pod-with-prestop-exec-hook still exists Sep 15 10:54:50.086: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Sep 15 10:54:50.090: INFO: Pod pod-with-prestop-exec-hook still exists Sep 15 10:54:52.086: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Sep 15 10:54:52.090: INFO: Pod pod-with-prestop-exec-hook still exists Sep 15 10:54:54.086: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Sep 15 10:54:54.090: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 10:54:54.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-5902" for this suite. 
• [SLOW TEST:18.159 seconds] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":303,"completed":84,"skipped":1536,"failed":0} SSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 10:54:54.123: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Sep 15 10:54:54.193: INFO: Waiting up to 5m0s for pod 
"downward-api-270c72b4-24c6-47ce-9502-29a31341eeaf" in namespace "downward-api-5413" to be "Succeeded or Failed" Sep 15 10:54:54.204: INFO: Pod "downward-api-270c72b4-24c6-47ce-9502-29a31341eeaf": Phase="Pending", Reason="", readiness=false. Elapsed: 11.512205ms Sep 15 10:54:56.209: INFO: Pod "downward-api-270c72b4-24c6-47ce-9502-29a31341eeaf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016390456s Sep 15 10:54:58.214: INFO: Pod "downward-api-270c72b4-24c6-47ce-9502-29a31341eeaf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021492178s STEP: Saw pod success Sep 15 10:54:58.215: INFO: Pod "downward-api-270c72b4-24c6-47ce-9502-29a31341eeaf" satisfied condition "Succeeded or Failed" Sep 15 10:54:58.218: INFO: Trying to get logs from node kali-worker pod downward-api-270c72b4-24c6-47ce-9502-29a31341eeaf container dapi-container: STEP: delete the pod Sep 15 10:54:58.372: INFO: Waiting for pod downward-api-270c72b4-24c6-47ce-9502-29a31341eeaf to disappear Sep 15 10:54:58.399: INFO: Pod downward-api-270c72b4-24c6-47ce-9502-29a31341eeaf no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 10:54:58.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5413" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":303,"completed":85,"skipped":1539,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 10:54:58.410: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Sep 15 10:54:59.885: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Sep 15 10:55:01.915: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735764099, loc:(*time.Location)(0x7702840)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735764099, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735764099, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735764099, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-85d57b96d6\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 15 10:55:05.025: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 15 10:55:05.029: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 10:55:06.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-8374" for this suite. 
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:7.911 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":303,"completed":86,"skipped":1546,"failed":0} SSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 10:55:06.321: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name 
secret-test-41cae881-9a81-41ca-83d9-14aa8c362757 STEP: Creating a pod to test consume secrets Sep 15 10:55:06.413: INFO: Waiting up to 5m0s for pod "pod-secrets-50051414-134a-4c37-8dbb-416874a4a995" in namespace "secrets-2072" to be "Succeeded or Failed" Sep 15 10:55:06.423: INFO: Pod "pod-secrets-50051414-134a-4c37-8dbb-416874a4a995": Phase="Pending", Reason="", readiness=false. Elapsed: 9.744704ms Sep 15 10:55:08.428: INFO: Pod "pod-secrets-50051414-134a-4c37-8dbb-416874a4a995": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014957941s Sep 15 10:55:10.433: INFO: Pod "pod-secrets-50051414-134a-4c37-8dbb-416874a4a995": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020215563s STEP: Saw pod success Sep 15 10:55:10.433: INFO: Pod "pod-secrets-50051414-134a-4c37-8dbb-416874a4a995" satisfied condition "Succeeded or Failed" Sep 15 10:55:10.436: INFO: Trying to get logs from node kali-worker pod pod-secrets-50051414-134a-4c37-8dbb-416874a4a995 container secret-env-test: STEP: delete the pod Sep 15 10:55:10.471: INFO: Waiting for pod pod-secrets-50051414-134a-4c37-8dbb-416874a4a995 to disappear Sep 15 10:55:10.507: INFO: Pod pod-secrets-50051414-134a-4c37-8dbb-416874a4a995 no longer exists [AfterEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 10:55:10.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2072" for this suite. 
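The Secrets run above follows the usual env-var consumption pattern: a secret key is mapped into a container environment variable and the container prints it, so the test can assert on the pod logs. A hedged reconstruction of that manifest (names are placeholders; the run above used generated names):

```shell
# Sketch of a pod consuming a secret key as an environment variable.
cat <<'EOF' > /tmp/pod-secret-env.yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret-test            # placeholder; the run above used a generated name
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test      # container name matches the log above
    image: busybox
    command: ["sh", "-c", "env | grep SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-test
          key: data-1
EOF
grep -c 'secretKeyRef' /tmp/pod-secret-env.yaml
```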
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":303,"completed":87,"skipped":1550,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 10:55:10.516: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 10:55:42.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-169" for this suite. STEP: Destroying namespace "nsdeletetest-103" for this suite. 
Sep 15 10:55:42.747: INFO: Namespace nsdeletetest-103 was already deleted STEP: Destroying namespace "nsdeletetest-1838" for this suite. • [SLOW TEST:32.235 seconds] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":303,"completed":88,"skipped":1566,"failed":0} [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 10:55:42.751: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap configmap-2971/configmap-test-9a53d50e-521a-4fa1-a44c-561c7d81fdce STEP: Creating a pod to test consume configMaps Sep 15 10:55:42.828: INFO: Waiting up to 5m0s for pod "pod-configmaps-85f5c641-29f6-4836-a16f-55cd71d515d0" in namespace "configmap-2971" to be 
"Succeeded or Failed" Sep 15 10:55:42.831: INFO: Pod "pod-configmaps-85f5c641-29f6-4836-a16f-55cd71d515d0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.611004ms Sep 15 10:55:44.836: INFO: Pod "pod-configmaps-85f5c641-29f6-4836-a16f-55cd71d515d0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008183779s Sep 15 10:55:46.841: INFO: Pod "pod-configmaps-85f5c641-29f6-4836-a16f-55cd71d515d0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012884309s STEP: Saw pod success Sep 15 10:55:46.841: INFO: Pod "pod-configmaps-85f5c641-29f6-4836-a16f-55cd71d515d0" satisfied condition "Succeeded or Failed" Sep 15 10:55:46.844: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-85f5c641-29f6-4836-a16f-55cd71d515d0 container env-test: STEP: delete the pod Sep 15 10:55:46.882: INFO: Waiting for pod pod-configmaps-85f5c641-29f6-4836-a16f-55cd71d515d0 to disappear Sep 15 10:55:46.896: INFO: Pod pod-configmaps-85f5c641-29f6-4836-a16f-55cd71d515d0 no longer exists [AfterEach] [sig-node] ConfigMap /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 10:55:46.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2971" for this suite. 
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":303,"completed":89,"skipped":1566,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 10:55:46.904: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Sep 15 10:55:47.001: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Sep 15 10:55:47.006: INFO: Waiting for terminating namespaces to be deleted... 
Sep 15 10:55:47.008: INFO: Logging pods the apiserver thinks is on node kali-worker before test Sep 15 10:55:47.012: INFO: kindnet-jk7qk from kube-system started at 2020-09-13 16:57:34 +0000 UTC (1 container statuses recorded) Sep 15 10:55:47.012: INFO: Container kindnet-cni ready: true, restart count 0 Sep 15 10:55:47.012: INFO: kube-proxy-kz8hk from kube-system started at 2020-09-13 16:57:34 +0000 UTC (1 container statuses recorded) Sep 15 10:55:47.012: INFO: Container kube-proxy ready: true, restart count 0 Sep 15 10:55:47.012: INFO: Logging pods the apiserver thinks is on node kali-worker2 before test Sep 15 10:55:47.016: INFO: kindnet-r64bh from kube-system started at 2020-09-13 16:57:34 +0000 UTC (1 container statuses recorded) Sep 15 10:55:47.016: INFO: Container kindnet-cni ready: true, restart count 0 Sep 15 10:55:47.016: INFO: kube-proxy-rnv9w from kube-system started at 2020-09-13 16:57:34 +0000 UTC (1 container statuses recorded) Sep 15 10:55:47.016: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.1634efb95d1bc15a], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 10:55:48.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6838" for this suite. 
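The negative scheduling probe above submits a pod whose `nodeSelector` matches no node, then watches for the `FailedScheduling` event ("0/3 nodes are available: 3 node(s) didn't match node selector."). A sketch of such a pod, with a hypothetical label no node carries:

```shell
# Pod with a node selector that no node satisfies; it should stay Pending.
cat <<'EOF' > /tmp/restricted-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod
spec:
  nodeSelector:
    example/no-such-label: "true"   # hypothetical label absent from all nodes
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.2
EOF
# After `kubectl apply -f /tmp/restricted-pod.yaml`, a FailedScheduling
# event would appear in `kubectl get events`.
grep -c 'nodeSelector' /tmp/restricted-pod.yaml
```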
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":303,"completed":90,"skipped":1586,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 10:55:48.049: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-7916 STEP: creating service affinity-nodeport in namespace services-7916 STEP: creating replication controller affinity-nodeport in namespace services-7916 I0915 10:55:48.192129 7 runners.go:190] Created replication controller with name: affinity-nodeport, namespace: services-7916, replica count: 3 I0915 10:55:51.242730 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 
inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0915 10:55:54.243008 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Sep 15 10:55:54.254: INFO: Creating new exec pod Sep 15 10:55:59.271: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=services-7916 execpod-affinitywgb54 -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport 80' Sep 15 10:55:59.519: INFO: stderr: "I0915 10:55:59.420842 1346 log.go:181] (0xc00018c370) (0xc000720000) Create stream\nI0915 10:55:59.420914 1346 log.go:181] (0xc00018c370) (0xc000720000) Stream added, broadcasting: 1\nI0915 10:55:59.422334 1346 log.go:181] (0xc00018c370) Reply frame received for 1\nI0915 10:55:59.422374 1346 log.go:181] (0xc00018c370) (0xc0008a8000) Create stream\nI0915 10:55:59.422390 1346 log.go:181] (0xc00018c370) (0xc0008a8000) Stream added, broadcasting: 3\nI0915 10:55:59.423152 1346 log.go:181] (0xc00018c370) Reply frame received for 3\nI0915 10:55:59.423179 1346 log.go:181] (0xc00018c370) (0xc0007200a0) Create stream\nI0915 10:55:59.423188 1346 log.go:181] (0xc00018c370) (0xc0007200a0) Stream added, broadcasting: 5\nI0915 10:55:59.423956 1346 log.go:181] (0xc00018c370) Reply frame received for 5\nI0915 10:55:59.512808 1346 log.go:181] (0xc00018c370) Data frame received for 5\nI0915 10:55:59.512842 1346 log.go:181] (0xc0007200a0) (5) Data frame handling\nI0915 10:55:59.512863 1346 log.go:181] (0xc0007200a0) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport 80\nI0915 10:55:59.512988 1346 log.go:181] (0xc00018c370) Data frame received for 5\nI0915 10:55:59.513015 1346 log.go:181] (0xc0007200a0) (5) Data frame handling\nI0915 10:55:59.513037 1346 log.go:181] (0xc0007200a0) (5) Data frame sent\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\nI0915 10:55:59.513661 1346 log.go:181] (0xc00018c370) Data frame received for 
5\nI0915 10:55:59.513690 1346 log.go:181] (0xc00018c370) Data frame received for 3\nI0915 10:55:59.513722 1346 log.go:181] (0xc0008a8000) (3) Data frame handling\nI0915 10:55:59.513746 1346 log.go:181] (0xc0007200a0) (5) Data frame handling\nI0915 10:55:59.515699 1346 log.go:181] (0xc00018c370) Data frame received for 1\nI0915 10:55:59.515718 1346 log.go:181] (0xc000720000) (1) Data frame handling\nI0915 10:55:59.515726 1346 log.go:181] (0xc000720000) (1) Data frame sent\nI0915 10:55:59.515748 1346 log.go:181] (0xc00018c370) (0xc000720000) Stream removed, broadcasting: 1\nI0915 10:55:59.515839 1346 log.go:181] (0xc00018c370) Go away received\nI0915 10:55:59.516080 1346 log.go:181] (0xc00018c370) (0xc000720000) Stream removed, broadcasting: 1\nI0915 10:55:59.516095 1346 log.go:181] (0xc00018c370) (0xc0008a8000) Stream removed, broadcasting: 3\nI0915 10:55:59.516102 1346 log.go:181] (0xc00018c370) (0xc0007200a0) Stream removed, broadcasting: 5\n" Sep 15 10:55:59.519: INFO: stdout: "" Sep 15 10:55:59.520: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=services-7916 execpod-affinitywgb54 -- /bin/sh -x -c nc -zv -t -w 2 10.102.194.5 80' Sep 15 10:55:59.742: INFO: stderr: "I0915 10:55:59.651796 1365 log.go:181] (0xc0007cd970) (0xc0007c4c80) Create stream\nI0915 10:55:59.651849 1365 log.go:181] (0xc0007cd970) (0xc0007c4c80) Stream added, broadcasting: 1\nI0915 10:55:59.659232 1365 log.go:181] (0xc0007cd970) Reply frame received for 1\nI0915 10:55:59.659281 1365 log.go:181] (0xc0007cd970) (0xc000c72000) Create stream\nI0915 10:55:59.659295 1365 log.go:181] (0xc0007cd970) (0xc000c72000) Stream added, broadcasting: 3\nI0915 10:55:59.660460 1365 log.go:181] (0xc0007cd970) Reply frame received for 3\nI0915 10:55:59.660500 1365 log.go:181] (0xc0007cd970) (0xc000c720a0) Create stream\nI0915 10:55:59.660512 1365 log.go:181] (0xc0007cd970) (0xc000c720a0) Stream added, broadcasting: 5\nI0915 10:55:59.661578 
1365 log.go:181] (0xc0007cd970) Reply frame received for 5\nI0915 10:55:59.737763 1365 log.go:181] (0xc0007cd970) Data frame received for 5\nI0915 10:55:59.737785 1365 log.go:181] (0xc000c720a0) (5) Data frame handling\nI0915 10:55:59.737792 1365 log.go:181] (0xc000c720a0) (5) Data frame sent\nI0915 10:55:59.737799 1365 log.go:181] (0xc0007cd970) Data frame received for 5\nI0915 10:55:59.737804 1365 log.go:181] (0xc000c720a0) (5) Data frame handling\nI0915 10:55:59.737812 1365 log.go:181] (0xc0007cd970) Data frame received for 3\nI0915 10:55:59.737817 1365 log.go:181] (0xc000c72000) (3) Data frame handling\n+ nc -zv -t -w 2 10.102.194.5 80\nConnection to 10.102.194.5 80 port [tcp/http] succeeded!\nI0915 10:55:59.738852 1365 log.go:181] (0xc0007cd970) Data frame received for 1\nI0915 10:55:59.738864 1365 log.go:181] (0xc0007c4c80) (1) Data frame handling\nI0915 10:55:59.738880 1365 log.go:181] (0xc0007c4c80) (1) Data frame sent\nI0915 10:55:59.738898 1365 log.go:181] (0xc0007cd970) (0xc0007c4c80) Stream removed, broadcasting: 1\nI0915 10:55:59.738919 1365 log.go:181] (0xc0007cd970) Go away received\nI0915 10:55:59.739228 1365 log.go:181] (0xc0007cd970) (0xc0007c4c80) Stream removed, broadcasting: 1\nI0915 10:55:59.739241 1365 log.go:181] (0xc0007cd970) (0xc000c72000) Stream removed, broadcasting: 3\nI0915 10:55:59.739247 1365 log.go:181] (0xc0007cd970) (0xc000c720a0) Stream removed, broadcasting: 5\n" Sep 15 10:55:59.742: INFO: stdout: "" Sep 15 10:55:59.742: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=services-7916 execpod-affinitywgb54 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.11 30332' Sep 15 10:55:59.954: INFO: stderr: "I0915 10:55:59.869770 1383 log.go:181] (0xc000d071e0) (0xc000141b80) Create stream\nI0915 10:55:59.869827 1383 log.go:181] (0xc000d071e0) (0xc000141b80) Stream added, broadcasting: 1\nI0915 10:55:59.874553 1383 log.go:181] (0xc000d071e0) Reply frame received for 
1\nI0915 10:55:59.874592 1383 log.go:181] (0xc000d071e0) (0xc000140280) Create stream\nI0915 10:55:59.874604 1383 log.go:181] (0xc000d071e0) (0xc000140280) Stream added, broadcasting: 3\nI0915 10:55:59.875587 1383 log.go:181] (0xc000d071e0) Reply frame received for 3\nI0915 10:55:59.875619 1383 log.go:181] (0xc000d071e0) (0xc0003cb9a0) Create stream\nI0915 10:55:59.875631 1383 log.go:181] (0xc000d071e0) (0xc0003cb9a0) Stream added, broadcasting: 5\nI0915 10:55:59.876510 1383 log.go:181] (0xc000d071e0) Reply frame received for 5\nI0915 10:55:59.948537 1383 log.go:181] (0xc000d071e0) Data frame received for 5\nI0915 10:55:59.948575 1383 log.go:181] (0xc0003cb9a0) (5) Data frame handling\nI0915 10:55:59.948601 1383 log.go:181] (0xc0003cb9a0) (5) Data frame sent\nI0915 10:55:59.948613 1383 log.go:181] (0xc000d071e0) Data frame received for 5\nI0915 10:55:59.948624 1383 log.go:181] (0xc0003cb9a0) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.11 30332\nConnection to 172.18.0.11 30332 port [tcp/30332] succeeded!\nI0915 10:55:59.948649 1383 log.go:181] (0xc0003cb9a0) (5) Data frame sent\nI0915 10:55:59.948762 1383 log.go:181] (0xc000d071e0) Data frame received for 5\nI0915 10:55:59.948781 1383 log.go:181] (0xc0003cb9a0) (5) Data frame handling\nI0915 10:55:59.948946 1383 log.go:181] (0xc000d071e0) Data frame received for 3\nI0915 10:55:59.948960 1383 log.go:181] (0xc000140280) (3) Data frame handling\nI0915 10:55:59.950222 1383 log.go:181] (0xc000d071e0) Data frame received for 1\nI0915 10:55:59.950235 1383 log.go:181] (0xc000141b80) (1) Data frame handling\nI0915 10:55:59.950243 1383 log.go:181] (0xc000141b80) (1) Data frame sent\nI0915 10:55:59.950251 1383 log.go:181] (0xc000d071e0) (0xc000141b80) Stream removed, broadcasting: 1\nI0915 10:55:59.950260 1383 log.go:181] (0xc000d071e0) Go away received\nI0915 10:55:59.950677 1383 log.go:181] (0xc000d071e0) (0xc000141b80) Stream removed, broadcasting: 1\nI0915 10:55:59.950699 1383 log.go:181] (0xc000d071e0) 
(0xc000140280) Stream removed, broadcasting: 3\nI0915 10:55:59.950711 1383 log.go:181] (0xc000d071e0) (0xc0003cb9a0) Stream removed, broadcasting: 5\n" Sep 15 10:55:59.954: INFO: stdout: "" Sep 15 10:55:59.954: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=services-7916 execpod-affinitywgb54 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.12 30332' Sep 15 10:56:00.165: INFO: stderr: "I0915 10:56:00.089874 1401 log.go:181] (0xc000e97130) (0xc000151180) Create stream\nI0915 10:56:00.089943 1401 log.go:181] (0xc000e97130) (0xc000151180) Stream added, broadcasting: 1\nI0915 10:56:00.095112 1401 log.go:181] (0xc000e97130) Reply frame received for 1\nI0915 10:56:00.095161 1401 log.go:181] (0xc000e97130) (0xc000150140) Create stream\nI0915 10:56:00.095175 1401 log.go:181] (0xc000e97130) (0xc000150140) Stream added, broadcasting: 3\nI0915 10:56:00.096241 1401 log.go:181] (0xc000e97130) Reply frame received for 3\nI0915 10:56:00.096283 1401 log.go:181] (0xc000e97130) (0xc0003cbcc0) Create stream\nI0915 10:56:00.096295 1401 log.go:181] (0xc000e97130) (0xc0003cbcc0) Stream added, broadcasting: 5\nI0915 10:56:00.097283 1401 log.go:181] (0xc000e97130) Reply frame received for 5\nI0915 10:56:00.157978 1401 log.go:181] (0xc000e97130) Data frame received for 3\nI0915 10:56:00.158030 1401 log.go:181] (0xc000150140) (3) Data frame handling\nI0915 10:56:00.158131 1401 log.go:181] (0xc000e97130) Data frame received for 5\nI0915 10:56:00.158169 1401 log.go:181] (0xc0003cbcc0) (5) Data frame handling\nI0915 10:56:00.158196 1401 log.go:181] (0xc0003cbcc0) (5) Data frame sent\nI0915 10:56:00.158208 1401 log.go:181] (0xc000e97130) Data frame received for 5\n+ nc -zv -t -w 2 172.18.0.12 30332\nConnection to 172.18.0.12 30332 port [tcp/30332] succeeded!\nI0915 10:56:00.158219 1401 log.go:181] (0xc0003cbcc0) (5) Data frame handling\nI0915 10:56:00.159523 1401 log.go:181] (0xc000e97130) Data frame received for 1\nI0915 
10:56:00.159559 1401 log.go:181] (0xc000151180) (1) Data frame handling\nI0915 10:56:00.159591 1401 log.go:181] (0xc000151180) (1) Data frame sent\nI0915 10:56:00.159829 1401 log.go:181] (0xc000e97130) (0xc000151180) Stream removed, broadcasting: 1\nI0915 10:56:00.159979 1401 log.go:181] (0xc000e97130) Go away received\nI0915 10:56:00.160416 1401 log.go:181] (0xc000e97130) (0xc000151180) Stream removed, broadcasting: 1\nI0915 10:56:00.160439 1401 log.go:181] (0xc000e97130) (0xc000150140) Stream removed, broadcasting: 3\nI0915 10:56:00.160452 1401 log.go:181] (0xc000e97130) (0xc0003cbcc0) Stream removed, broadcasting: 5\n" Sep 15 10:56:00.165: INFO: stdout: "" Sep 15 10:56:00.165: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=services-7916 execpod-affinitywgb54 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.11:30332/ ; done' Sep 15 10:56:00.478: INFO: stderr: "I0915 10:56:00.304841 1419 log.go:181] (0xc000d2efd0) (0xc000176aa0) Create stream\nI0915 10:56:00.304883 1419 log.go:181] (0xc000d2efd0) (0xc000176aa0) Stream added, broadcasting: 1\nI0915 10:56:00.310500 1419 log.go:181] (0xc000d2efd0) Reply frame received for 1\nI0915 10:56:00.310541 1419 log.go:181] (0xc000d2efd0) (0xc00031de00) Create stream\nI0915 10:56:00.310551 1419 log.go:181] (0xc000d2efd0) (0xc00031de00) Stream added, broadcasting: 3\nI0915 10:56:00.311443 1419 log.go:181] (0xc000d2efd0) Reply frame received for 3\nI0915 10:56:00.311481 1419 log.go:181] (0xc000d2efd0) (0xc000177540) Create stream\nI0915 10:56:00.311498 1419 log.go:181] (0xc000d2efd0) (0xc000177540) Stream added, broadcasting: 5\nI0915 10:56:00.312420 1419 log.go:181] (0xc000d2efd0) Reply frame received for 5\nI0915 10:56:00.367489 1419 log.go:181] (0xc000d2efd0) Data frame received for 5\nI0915 10:56:00.367521 1419 log.go:181] (0xc000177540) (5) Data frame handling\nI0915 10:56:00.367532 1419 log.go:181] 
(0xc000177540) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30332/\nI0915 10:56:00.367548 1419 log.go:181] (0xc000d2efd0) Data frame received for 3\nI0915 10:56:00.367555 1419 log.go:181] (0xc00031de00) (3) Data frame handling\nI0915 10:56:00.367566 1419 log.go:181] (0xc00031de00) (3) Data frame sent\nI0915 10:56:00.372802 1419 log.go:181] (0xc000d2efd0) Data frame received for 3\nI0915 10:56:00.372821 1419 log.go:181] (0xc00031de00) (3) Data frame handling\nI0915 10:56:00.372835 1419 log.go:181] (0xc00031de00) (3) Data frame sent\nI0915 10:56:00.373406 1419 log.go:181] (0xc000d2efd0) Data frame received for 3\nI0915 10:56:00.373425 1419 log.go:181] (0xc00031de00) (3) Data frame handling\nI0915 10:56:00.373434 1419 log.go:181] (0xc00031de00) (3) Data frame sent\nI0915 10:56:00.373467 1419 log.go:181] (0xc000d2efd0) Data frame received for 5\nI0915 10:56:00.373488 1419 log.go:181] (0xc000177540) (5) Data frame handling\nI0915 10:56:00.373500 1419 log.go:181] (0xc000177540) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30332/\nI0915 10:56:00.379072 1419 log.go:181] (0xc000d2efd0) Data frame received for 3\nI0915 10:56:00.379103 1419 log.go:181] (0xc00031de00) (3) Data frame handling\nI0915 10:56:00.379180 1419 log.go:181] (0xc00031de00) (3) Data frame sent\nI0915 10:56:00.379730 1419 log.go:181] (0xc000d2efd0) Data frame received for 5\nI0915 10:56:00.379743 1419 log.go:181] (0xc000177540) (5) Data frame handling\nI0915 10:56:00.379749 1419 log.go:181] (0xc000177540) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30332/\nI0915 10:56:00.379816 1419 log.go:181] (0xc000d2efd0) Data frame received for 3\nI0915 10:56:00.379824 1419 log.go:181] (0xc00031de00) (3) Data frame handling\nI0915 10:56:00.379833 1419 log.go:181] (0xc00031de00) (3) Data frame sent\nI0915 10:56:00.386329 1419 log.go:181] (0xc000d2efd0) Data frame received for 3\nI0915 10:56:00.386353 
1419 log.go:181] (0xc00031de00) (3) Data frame handling\nI0915 10:56:00.386364 1419 log.go:181] (0xc00031de00) (3) Data frame sent\nI0915 10:56:00.386872 1419 log.go:181] (0xc000d2efd0) Data frame received for 3\nI0915 10:56:00.386904 1419 log.go:181] (0xc00031de00) (3) Data frame handling\nI0915 10:56:00.386919 1419 log.go:181] (0xc00031de00) (3) Data frame sent\nI0915 10:56:00.386942 1419 log.go:181] (0xc000d2efd0) Data frame received for 5\nI0915 10:56:00.386953 1419 log.go:181] (0xc000177540) (5) Data frame handling\nI0915 10:56:00.386966 1419 log.go:181] (0xc000177540) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30332/\nI0915 10:56:00.390460 1419 log.go:181] (0xc000d2efd0) Data frame received for 3\nI0915 10:56:00.390487 1419 log.go:181] (0xc00031de00) (3) Data frame handling\nI0915 10:56:00.390505 1419 log.go:181] (0xc00031de00) (3) Data frame sent\nI0915 10:56:00.391068 1419 log.go:181] (0xc000d2efd0) Data frame received for 3\nI0915 10:56:00.391081 1419 log.go:181] (0xc00031de00) (3) Data frame handling\nI0915 10:56:00.391087 1419 log.go:181] (0xc00031de00) (3) Data frame sent\nI0915 10:56:00.391119 1419 log.go:181] (0xc000d2efd0) Data frame received for 5\nI0915 10:56:00.391139 1419 log.go:181] (0xc000177540) (5) Data frame handling\nI0915 10:56:00.391166 1419 log.go:181] (0xc000177540) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30332/\nI0915 10:56:00.396364 1419 log.go:181] (0xc000d2efd0) Data frame received for 3\nI0915 10:56:00.396386 1419 log.go:181] (0xc00031de00) (3) Data frame handling\nI0915 10:56:00.396408 1419 log.go:181] (0xc00031de00) (3) Data frame sent\nI0915 10:56:00.396960 1419 log.go:181] (0xc000d2efd0) Data frame received for 5\nI0915 10:56:00.396985 1419 log.go:181] (0xc000177540) (5) Data frame handling\nI0915 10:56:00.397005 1419 log.go:181] (0xc000177540) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30332/\nI0915 
10:56:00.397048 1419 log.go:181] (0xc000d2efd0) Data frame received for 3\nI0915 10:56:00.397073 1419 log.go:181] (0xc00031de00) (3) Data frame handling\nI0915 10:56:00.397096 1419 log.go:181] (0xc00031de00) (3) Data frame sent\nI0915 10:56:00.401570 1419 log.go:181] (0xc000d2efd0) Data frame received for 3\nI0915 10:56:00.401586 1419 log.go:181] (0xc00031de00) (3) Data frame handling\nI0915 10:56:00.401596 1419 log.go:181] (0xc00031de00) (3) Data frame sent\nI0915 10:56:00.402311 1419 log.go:181] (0xc000d2efd0) Data frame received for 3\nI0915 10:56:00.402338 1419 log.go:181] (0xc00031de00) (3) Data frame handling\nI0915 10:56:00.402362 1419 log.go:181] (0xc00031de00) (3) Data frame sent\nI0915 10:56:00.402411 1419 log.go:181] (0xc000d2efd0) Data frame received for 5\nI0915 10:56:00.402433 1419 log.go:181] (0xc000177540) (5) Data frame handling\nI0915 10:56:00.402453 1419 log.go:181] (0xc000177540) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30332/\nI0915 10:56:00.407800 1419 log.go:181] (0xc000d2efd0) Data frame received for 3\nI0915 10:56:00.407833 1419 log.go:181] (0xc00031de00) (3) Data frame handling\nI0915 10:56:00.407856 1419 log.go:181] (0xc00031de00) (3) Data frame sent\nI0915 10:56:00.408871 1419 log.go:181] (0xc000d2efd0) Data frame received for 3\nI0915 10:56:00.408901 1419 log.go:181] (0xc000d2efd0) Data frame received for 5\nI0915 10:56:00.408939 1419 log.go:181] (0xc000177540) (5) Data frame handling\nI0915 10:56:00.408960 1419 log.go:181] (0xc000177540) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30332/\nI0915 10:56:00.408980 1419 log.go:181] (0xc00031de00) (3) Data frame handling\nI0915 10:56:00.408998 1419 log.go:181] (0xc00031de00) (3) Data frame sent\nI0915 10:56:00.414088 1419 log.go:181] (0xc000d2efd0) Data frame received for 3\nI0915 10:56:00.414114 1419 log.go:181] (0xc00031de00) (3) Data frame handling\nI0915 10:56:00.414132 1419 log.go:181] (0xc00031de00) (3) Data 
frame sent\nI0915 10:56:00.414682 1419 log.go:181] (0xc000d2efd0) Data frame received for 3\nI0915 10:56:00.414711 1419 log.go:181] (0xc000d2efd0) Data frame received for 5\nI0915 10:56:00.414741 1419 log.go:181] (0xc000177540) (5) Data frame handling\nI0915 10:56:00.414757 1419 log.go:181] (0xc000177540) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30332/\nI0915 10:56:00.414777 1419 log.go:181] (0xc00031de00) (3) Data frame handling\nI0915 10:56:00.414791 1419 log.go:181] (0xc00031de00) (3) Data frame sent\nI0915 10:56:00.421911 1419 log.go:181] (0xc000d2efd0) Data frame received for 3\nI0915 10:56:00.421937 1419 log.go:181] (0xc00031de00) (3) Data frame handling\nI0915 10:56:00.421964 1419 log.go:181] (0xc00031de00) (3) Data frame sent\nI0915 10:56:00.422940 1419 log.go:181] (0xc000d2efd0) Data frame received for 5\nI0915 10:56:00.422967 1419 log.go:181] (0xc000177540) (5) Data frame handling\nI0915 10:56:00.422981 1419 log.go:181] (0xc000177540) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30332/\nI0915 10:56:00.423067 1419 log.go:181] (0xc000d2efd0) Data frame received for 3\nI0915 10:56:00.423089 1419 log.go:181] (0xc00031de00) (3) Data frame handling\nI0915 10:56:00.423109 1419 log.go:181] (0xc00031de00) (3) Data frame sent\nI0915 10:56:00.427532 1419 log.go:181] (0xc000d2efd0) Data frame received for 3\nI0915 10:56:00.427560 1419 log.go:181] (0xc00031de00) (3) Data frame handling\nI0915 10:56:00.427579 1419 log.go:181] (0xc00031de00) (3) Data frame sent\nI0915 10:56:00.428495 1419 log.go:181] (0xc000d2efd0) Data frame received for 3\nI0915 10:56:00.428522 1419 log.go:181] (0xc00031de00) (3) Data frame handling\nI0915 10:56:00.428535 1419 log.go:181] (0xc00031de00) (3) Data frame sent\nI0915 10:56:00.428557 1419 log.go:181] (0xc000d2efd0) Data frame received for 5\nI0915 10:56:00.428568 1419 log.go:181] (0xc000177540) (5) Data frame handling\nI0915 10:56:00.428580 1419 log.go:181] 
(0xc000177540) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30332/\nI0915 10:56:00.435312 1419 log.go:181] (0xc000d2efd0) Data frame received for 3\nI0915 10:56:00.435334 1419 log.go:181] (0xc00031de00) (3) Data frame handling\nI0915 10:56:00.435358 1419 log.go:181] (0xc00031de00) (3) Data frame sent\nI0915 10:56:00.435927 1419 log.go:181] (0xc000d2efd0) Data frame received for 3\nI0915 10:56:00.435954 1419 log.go:181] (0xc00031de00) (3) Data frame handling\nI0915 10:56:00.435967 1419 log.go:181] (0xc00031de00) (3) Data frame sent\nI0915 10:56:00.436003 1419 log.go:181] (0xc000d2efd0) Data frame received for 5\nI0915 10:56:00.436025 1419 log.go:181] (0xc000177540) (5) Data frame handling\nI0915 10:56:00.436046 1419 log.go:181] (0xc000177540) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30332/\nI0915 10:56:00.442752 1419 log.go:181] (0xc000d2efd0) Data frame received for 3\nI0915 10:56:00.442775 1419 log.go:181] (0xc00031de00) (3) Data frame handling\nI0915 10:56:00.442794 1419 log.go:181] (0xc00031de00) (3) Data frame sent\nI0915 10:56:00.443341 1419 log.go:181] (0xc000d2efd0) Data frame received for 5\nI0915 10:56:00.443366 1419 log.go:181] (0xc000177540) (5) Data frame handling\nI0915 10:56:00.443380 1419 log.go:181] (0xc000177540) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30332/\nI0915 10:56:00.443401 1419 log.go:181] (0xc000d2efd0) Data frame received for 3\nI0915 10:56:00.443413 1419 log.go:181] (0xc00031de00) (3) Data frame handling\nI0915 10:56:00.443424 1419 log.go:181] (0xc00031de00) (3) Data frame sent\nI0915 10:56:00.449153 1419 log.go:181] (0xc000d2efd0) Data frame received for 3\nI0915 10:56:00.449177 1419 log.go:181] (0xc00031de00) (3) Data frame handling\nI0915 10:56:00.449195 1419 log.go:181] (0xc00031de00) (3) Data frame sent\nI0915 10:56:00.449870 1419 log.go:181] (0xc000d2efd0) Data frame received for 3\nI0915 10:56:00.449899 1419 
log.go:181] (0xc000d2efd0) Data frame received for 5\nI0915 10:56:00.449926 1419 log.go:181] (0xc000177540) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30332/\nI0915 10:56:00.449950 1419 log.go:181] (0xc00031de00) (3) Data frame handling\nI0915 10:56:00.449978 1419 log.go:181] (0xc00031de00) (3) Data frame sent\nI0915 10:56:00.450008 1419 log.go:181] (0xc000177540) (5) Data frame sent\nI0915 10:56:00.454716 1419 log.go:181] (0xc000d2efd0) Data frame received for 3\nI0915 10:56:00.454744 1419 log.go:181] (0xc00031de00) (3) Data frame handling\nI0915 10:56:00.454760 1419 log.go:181] (0xc00031de00) (3) Data frame sent\nI0915 10:56:00.455345 1419 log.go:181] (0xc000d2efd0) Data frame received for 3\nI0915 10:56:00.455369 1419 log.go:181] (0xc00031de00) (3) Data frame handling\nI0915 10:56:00.455382 1419 log.go:181] (0xc00031de00) (3) Data frame sent\nI0915 10:56:00.455400 1419 log.go:181] (0xc000d2efd0) Data frame received for 5\nI0915 10:56:00.455413 1419 log.go:181] (0xc000177540) (5) Data frame handling\nI0915 10:56:00.455425 1419 log.go:181] (0xc000177540) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30332/\nI0915 10:56:00.462066 1419 log.go:181] (0xc000d2efd0) Data frame received for 3\nI0915 10:56:00.462085 1419 log.go:181] (0xc00031de00) (3) Data frame handling\nI0915 10:56:00.462100 1419 log.go:181] (0xc00031de00) (3) Data frame sent\nI0915 10:56:00.462652 1419 log.go:181] (0xc000d2efd0) Data frame received for 5\nI0915 10:56:00.462679 1419 log.go:181] (0xc000177540) (5) Data frame handling\nI0915 10:56:00.462692 1419 log.go:181] (0xc000177540) (5) Data frame sent\n+ echo\n+ curl -qI0915 10:56:00.462710 1419 log.go:181] (0xc000d2efd0) Data frame received for 3\nI0915 10:56:00.462739 1419 log.go:181] (0xc00031de00) (3) Data frame handling\nI0915 10:56:00.462752 1419 log.go:181] (0xc00031de00) (3) Data frame sent\nI0915 10:56:00.462769 1419 log.go:181] (0xc000d2efd0) Data frame received 
for 5\nI0915 10:56:00.462780 1419 log.go:181] (0xc000177540) (5) Data frame handling\nI0915 10:56:00.462791 1419 log.go:181] (0xc000177540) (5) Data frame sent\n -s --connect-timeout 2 http://172.18.0.11:30332/\nI0915 10:56:00.469988 1419 log.go:181] (0xc000d2efd0) Data frame received for 3\nI0915 10:56:00.470010 1419 log.go:181] (0xc00031de00) (3) Data frame handling\nI0915 10:56:00.470022 1419 log.go:181] (0xc00031de00) (3) Data frame sent\nI0915 10:56:00.471025 1419 log.go:181] (0xc000d2efd0) Data frame received for 3\nI0915 10:56:00.471064 1419 log.go:181] (0xc00031de00) (3) Data frame handling\nI0915 10:56:00.471094 1419 log.go:181] (0xc000d2efd0) Data frame received for 5\nI0915 10:56:00.471115 1419 log.go:181] (0xc000177540) (5) Data frame handling\nI0915 10:56:00.473269 1419 log.go:181] (0xc000d2efd0) Data frame received for 1\nI0915 10:56:00.473291 1419 log.go:181] (0xc000176aa0) (1) Data frame handling\nI0915 10:56:00.473306 1419 log.go:181] (0xc000176aa0) (1) Data frame sent\nI0915 10:56:00.473319 1419 log.go:181] (0xc000d2efd0) (0xc000176aa0) Stream removed, broadcasting: 1\nI0915 10:56:00.473331 1419 log.go:181] (0xc000d2efd0) Go away received\nI0915 10:56:00.473867 1419 log.go:181] (0xc000d2efd0) (0xc000176aa0) Stream removed, broadcasting: 1\nI0915 10:56:00.473892 1419 log.go:181] (0xc000d2efd0) (0xc00031de00) Stream removed, broadcasting: 3\nI0915 10:56:00.473904 1419 log.go:181] (0xc000d2efd0) (0xc000177540) Stream removed, broadcasting: 5\n" Sep 15 10:56:00.479: INFO: stdout: "\naffinity-nodeport-lrxmb\naffinity-nodeport-lrxmb\naffinity-nodeport-lrxmb\naffinity-nodeport-lrxmb\naffinity-nodeport-lrxmb\naffinity-nodeport-lrxmb\naffinity-nodeport-lrxmb\naffinity-nodeport-lrxmb\naffinity-nodeport-lrxmb\naffinity-nodeport-lrxmb\naffinity-nodeport-lrxmb\naffinity-nodeport-lrxmb\naffinity-nodeport-lrxmb\naffinity-nodeport-lrxmb\naffinity-nodeport-lrxmb\naffinity-nodeport-lrxmb" Sep 15 10:56:00.479: INFO: Received response from host: 
affinity-nodeport-lrxmb Sep 15 10:56:00.479: INFO: Received response from host: affinity-nodeport-lrxmb Sep 15 10:56:00.479: INFO: Received response from host: affinity-nodeport-lrxmb Sep 15 10:56:00.479: INFO: Received response from host: affinity-nodeport-lrxmb Sep 15 10:56:00.479: INFO: Received response from host: affinity-nodeport-lrxmb Sep 15 10:56:00.479: INFO: Received response from host: affinity-nodeport-lrxmb Sep 15 10:56:00.479: INFO: Received response from host: affinity-nodeport-lrxmb Sep 15 10:56:00.479: INFO: Received response from host: affinity-nodeport-lrxmb Sep 15 10:56:00.479: INFO: Received response from host: affinity-nodeport-lrxmb Sep 15 10:56:00.479: INFO: Received response from host: affinity-nodeport-lrxmb Sep 15 10:56:00.479: INFO: Received response from host: affinity-nodeport-lrxmb Sep 15 10:56:00.479: INFO: Received response from host: affinity-nodeport-lrxmb Sep 15 10:56:00.479: INFO: Received response from host: affinity-nodeport-lrxmb Sep 15 10:56:00.479: INFO: Received response from host: affinity-nodeport-lrxmb Sep 15 10:56:00.479: INFO: Received response from host: affinity-nodeport-lrxmb Sep 15 10:56:00.479: INFO: Received response from host: affinity-nodeport-lrxmb Sep 15 10:56:00.479: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport in namespace services-7916, will wait for the garbage collector to delete the pods Sep 15 10:56:00.588: INFO: Deleting ReplicationController affinity-nodeport took: 5.45435ms Sep 15 10:56:01.088: INFO: Terminating ReplicationController affinity-nodeport pods took: 500.244736ms [AfterEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 10:56:13.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7916" for this suite. 
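The sixteen identical `affinity-nodeport-lrxmb` responses above are exactly what this conformance test asserts: with ClientIP session affinity, every request from the exec pod should land on the same backend. A minimal sketch of the kind of Service involved, assuming hypothetical selector and port values (the e2e framework generates its manifest in code, so names here are illustrative only):

```yaml
# Illustrative sketch only; selector labels and ports are assumptions,
# not the manifest the e2e framework actually generates.
apiVersion: v1
kind: Service
metadata:
  name: affinity-nodeport
spec:
  type: NodePort
  selector:
    app: affinity-nodeport
  sessionAffinity: ClientIP   # pin each client IP to a single backend pod
  ports:
    - port: 80
      targetPort: 80
```

With `sessionAffinity: ClientIP`, kube-proxy routes all traffic from a given source IP to one endpoint, which is why the curl loop against the NodePort never sees a second pod name.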
[AfterEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:25.318 seconds] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":303,"completed":91,"skipped":1603,"failed":0} [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 10:56:13.368: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename certificates STEP: Waiting for a default service account to be provisioned in namespace [It] should support CSR API operations [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting /apis STEP: getting /apis/certificates.k8s.io STEP: getting /apis/certificates.k8s.io/v1 STEP: creating STEP: getting STEP: listing STEP: watching Sep 15 10:56:13.833: INFO: starting watch STEP: patching STEP: updating Sep 15 10:56:13.857: INFO: waiting for watch 
events with expected annotations Sep 15 10:56:13.857: INFO: saw patched and updated annotations STEP: getting /approval STEP: patching /approval STEP: updating /approval STEP: getting /status STEP: patching /status STEP: updating /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 10:56:14.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "certificates-1750" for this suite. •{"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":303,"completed":92,"skipped":1603,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 10:56:14.061: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 15 10:58:14.175: INFO: Deleting pod "var-expansion-6ad380f4-6b76-4014-b1a5-e1990290d01d" in namespace "var-expansion-7875" Sep 15 10:58:14.182: 
INFO: Wait up to 5m0s for pod "var-expansion-6ad380f4-6b76-4014-b1a5-e1990290d01d" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 10:58:16.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7875" for this suite. • [SLOW TEST:122.160 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]","total":303,"completed":93,"skipped":1641,"failed":0} S ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 10:58:16.221: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-9736 [It] should have a working scale subresource [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating statefulset ss in namespace statefulset-9736 Sep 15 10:58:16.363: INFO: Found 0 stateful pods, waiting for 1 Sep 15 10:58:26.368: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Sep 15 10:58:26.406: INFO: Deleting all statefulset in ns statefulset-9736 Sep 15 10:58:26.424: INFO: Scaling statefulset ss to 0 Sep 15 10:58:36.483: INFO: Waiting for statefulset status.replicas updated to 0 Sep 15 10:58:36.486: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 10:58:36.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9736" for this suite. 
• [SLOW TEST:20.286 seconds] [sig-apps] StatefulSet /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should have a working scale subresource [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":303,"completed":94,"skipped":1642,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 10:58:36.507: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on tmpfs Sep 15 10:58:36.603: INFO: Waiting up to 5m0s for pod "pod-31f765b2-407c-40a3-8c6e-37bffdafad6b" in namespace "emptydir-429" to be 
"Succeeded or Failed" Sep 15 10:58:36.606: INFO: Pod "pod-31f765b2-407c-40a3-8c6e-37bffdafad6b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.516977ms Sep 15 10:58:38.612: INFO: Pod "pod-31f765b2-407c-40a3-8c6e-37bffdafad6b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009378449s Sep 15 10:58:40.616: INFO: Pod "pod-31f765b2-407c-40a3-8c6e-37bffdafad6b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013673481s STEP: Saw pod success Sep 15 10:58:40.616: INFO: Pod "pod-31f765b2-407c-40a3-8c6e-37bffdafad6b" satisfied condition "Succeeded or Failed" Sep 15 10:58:40.619: INFO: Trying to get logs from node kali-worker pod pod-31f765b2-407c-40a3-8c6e-37bffdafad6b container test-container: STEP: delete the pod Sep 15 10:58:40.763: INFO: Waiting for pod pod-31f765b2-407c-40a3-8c6e-37bffdafad6b to disappear Sep 15 10:58:40.782: INFO: Pod pod-31f765b2-407c-40a3-8c6e-37bffdafad6b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 10:58:40.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-429" for this suite. 
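The `(non-root,0777,tmpfs)` case above creates a pod whose emptyDir volume is memory-backed and then verifies the mode bits from inside the container. A rough sketch of such a pod, with image, command, and names chosen as assumptions for illustration rather than copied from the generated test manifest:

```yaml
# Illustrative sketch; image, command, and names are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo
spec:
  restartPolicy: Never
  containers:
    - name: test-container
      image: busybox
      # Print the mount's permission bits; the e2e test asserts on output like this.
      command: ["sh", "-c", "stat -c %a /mnt/test"]
      volumeMounts:
        - name: scratch
          mountPath: /mnt/test
  volumes:
    - name: scratch
      emptyDir:
        medium: Memory   # tmpfs-backed, matching the "tmpfs" variant of the test
```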
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":95,"skipped":1656,"failed":0} S ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 10:58:40.790: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 15 10:58:40.855: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2928370f-b7cc-45b9-87a8-39b5d08fe8aa" in namespace "projected-2576" to be "Succeeded or Failed" Sep 15 10:58:40.859: INFO: Pod "downwardapi-volume-2928370f-b7cc-45b9-87a8-39b5d08fe8aa": Phase="Pending", Reason="", readiness=false. Elapsed: 3.843749ms Sep 15 10:58:42.905: INFO: Pod "downwardapi-volume-2928370f-b7cc-45b9-87a8-39b5d08fe8aa": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.050714572s Sep 15 10:58:44.910: INFO: Pod "downwardapi-volume-2928370f-b7cc-45b9-87a8-39b5d08fe8aa": Phase="Running", Reason="", readiness=true. Elapsed: 4.055530881s Sep 15 10:58:46.915: INFO: Pod "downwardapi-volume-2928370f-b7cc-45b9-87a8-39b5d08fe8aa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.06044469s STEP: Saw pod success Sep 15 10:58:46.915: INFO: Pod "downwardapi-volume-2928370f-b7cc-45b9-87a8-39b5d08fe8aa" satisfied condition "Succeeded or Failed" Sep 15 10:58:46.919: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-2928370f-b7cc-45b9-87a8-39b5d08fe8aa container client-container: STEP: delete the pod Sep 15 10:58:46.958: INFO: Waiting for pod downwardapi-volume-2928370f-b7cc-45b9-87a8-39b5d08fe8aa to disappear Sep 15 10:58:46.969: INFO: Pod downwardapi-volume-2928370f-b7cc-45b9-87a8-39b5d08fe8aa no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 10:58:46.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2576" for this suite. 
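The projected downwardAPI test above exposes the container's memory request as a file via a projected volume and checks the container can read it back. A hedged sketch of that shape, with all names and the 32Mi request being illustrative assumptions:

```yaml
# Illustrative sketch; names and the request size are assumptions,
# not the generated e2e manifest.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-memory-demo
spec:
  restartPolicy: Never
  containers:
    - name: client-container
      image: busybox
      # Read back the memory request that the projected volume exposes as a file.
      command: ["sh", "-c", "cat /etc/podinfo/memory_request"]
      resources:
        requests:
          memory: "32Mi"
      volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
  volumes:
    - name: podinfo
      projected:
        sources:
          - downwardAPI:
              items:
                - path: memory_request
                  resourceFieldRef:
                    containerName: client-container
                    resource: requests.memory
```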
• [SLOW TEST:6.189 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":303,"completed":96,"skipped":1657,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 10:58:46.979: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 15 10:58:47.088: INFO: The status of Pod test-webserver-6992b442-c834-4d85-84fe-81c59b49d166 is Pending, waiting for it 
to be Running (with Ready = true) Sep 15 10:58:49.093: INFO: The status of Pod test-webserver-6992b442-c834-4d85-84fe-81c59b49d166 is Pending, waiting for it to be Running (with Ready = true) Sep 15 10:58:51.092: INFO: The status of Pod test-webserver-6992b442-c834-4d85-84fe-81c59b49d166 is Running (Ready = false) Sep 15 10:58:53.093: INFO: The status of Pod test-webserver-6992b442-c834-4d85-84fe-81c59b49d166 is Running (Ready = false) Sep 15 10:58:55.093: INFO: The status of Pod test-webserver-6992b442-c834-4d85-84fe-81c59b49d166 is Running (Ready = false) Sep 15 10:58:57.093: INFO: The status of Pod test-webserver-6992b442-c834-4d85-84fe-81c59b49d166 is Running (Ready = false) Sep 15 10:58:59.093: INFO: The status of Pod test-webserver-6992b442-c834-4d85-84fe-81c59b49d166 is Running (Ready = false) Sep 15 10:59:01.093: INFO: The status of Pod test-webserver-6992b442-c834-4d85-84fe-81c59b49d166 is Running (Ready = false) Sep 15 10:59:03.097: INFO: The status of Pod test-webserver-6992b442-c834-4d85-84fe-81c59b49d166 is Running (Ready = false) Sep 15 10:59:05.092: INFO: The status of Pod test-webserver-6992b442-c834-4d85-84fe-81c59b49d166 is Running (Ready = false) Sep 15 10:59:07.092: INFO: The status of Pod test-webserver-6992b442-c834-4d85-84fe-81c59b49d166 is Running (Ready = true) Sep 15 10:59:07.095: INFO: Container started at 2020-09-15 10:58:49 +0000 UTC, pod became ready at 2020-09-15 10:59:05 +0000 UTC [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 10:59:07.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7592" for this suite. 
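The readiness-probe test above shows the pod Running but `Ready = false` for roughly fifteen seconds before the first successful probe flips it to ready. A minimal sketch of a pod producing that behavior, assuming a generic web image and probe timings chosen to mirror the window visible in the log (both are assumptions, not the test's actual spec):

```yaml
# Illustrative sketch; image and probe timings are assumptions chosen to
# mirror the ~15s not-ready window seen in the log above.
apiVersion: v1
kind: Pod
metadata:
  name: readiness-delay-demo
spec:
  containers:
    - name: test-webserver
      image: nginx
      readinessProbe:
        httpGet:
          path: /
          port: 80
        initialDelaySeconds: 15   # container runs, but stays Ready=false until probed
        periodSeconds: 5
```

Because readiness gates Service endpoints rather than restarts, the container is never restarted here; it simply receives no traffic until the probe succeeds.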
• [SLOW TEST:20.125 seconds] [k8s.io] Probing container /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":303,"completed":97,"skipped":1670,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 10:59:07.104: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Sep 15 10:59:07.172: INFO: >>> kubeConfig: /root/.kube/config Sep 15 10:59:10.169: INFO: >>> kubeConfig: 
/root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 10:59:21.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6977" for this suite. • [SLOW TEST:14.019 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":303,"completed":98,"skipped":1682,"failed":0} SS ------------------------------ [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 10:59:21.123: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should run through the lifecycle of a ServiceAccount [Conformance] 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a ServiceAccount STEP: watching for the ServiceAccount to be added STEP: patching the ServiceAccount STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector) STEP: deleting the ServiceAccount [AfterEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 10:59:21.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-5912" for this suite. •{"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":303,"completed":99,"skipped":1684,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 10:59:21.328: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-45f81bcf-ac0f-464b-bfa4-3a440504e6bd STEP: 
Creating a pod to test consume configMaps Sep 15 10:59:21.431: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d625c1f5-f0d2-4b16-970c-fb96698df374" in namespace "projected-723" to be "Succeeded or Failed" Sep 15 10:59:21.445: INFO: Pod "pod-projected-configmaps-d625c1f5-f0d2-4b16-970c-fb96698df374": Phase="Pending", Reason="", readiness=false. Elapsed: 14.382398ms Sep 15 10:59:23.481: INFO: Pod "pod-projected-configmaps-d625c1f5-f0d2-4b16-970c-fb96698df374": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05007573s Sep 15 10:59:25.485: INFO: Pod "pod-projected-configmaps-d625c1f5-f0d2-4b16-970c-fb96698df374": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.054614471s STEP: Saw pod success Sep 15 10:59:25.485: INFO: Pod "pod-projected-configmaps-d625c1f5-f0d2-4b16-970c-fb96698df374" satisfied condition "Succeeded or Failed" Sep 15 10:59:25.489: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-d625c1f5-f0d2-4b16-970c-fb96698df374 container projected-configmap-volume-test: STEP: delete the pod Sep 15 10:59:25.566: INFO: Waiting for pod pod-projected-configmaps-d625c1f5-f0d2-4b16-970c-fb96698df374 to disappear Sep 15 10:59:25.575: INFO: Pod pod-projected-configmaps-d625c1f5-f0d2-4b16-970c-fb96698df374 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 10:59:25.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-723" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":303,"completed":100,"skipped":1697,"failed":0} ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 10:59:25.583: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-92fd8121-cc95-4b66-9753-f1fa7ac7f2f1 STEP: Creating a pod to test consume configMaps Sep 15 10:59:25.686: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c4ee9cee-50c0-44f5-83ba-a3ea4c4b2afd" in namespace "projected-715" to be "Succeeded or Failed" Sep 15 10:59:25.703: INFO: Pod "pod-projected-configmaps-c4ee9cee-50c0-44f5-83ba-a3ea4c4b2afd": Phase="Pending", Reason="", readiness=false. Elapsed: 17.507312ms Sep 15 10:59:27.708: INFO: Pod "pod-projected-configmaps-c4ee9cee-50c0-44f5-83ba-a3ea4c4b2afd": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.021893898s Sep 15 10:59:29.712: INFO: Pod "pod-projected-configmaps-c4ee9cee-50c0-44f5-83ba-a3ea4c4b2afd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025889196s STEP: Saw pod success Sep 15 10:59:29.712: INFO: Pod "pod-projected-configmaps-c4ee9cee-50c0-44f5-83ba-a3ea4c4b2afd" satisfied condition "Succeeded or Failed" Sep 15 10:59:29.714: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-c4ee9cee-50c0-44f5-83ba-a3ea4c4b2afd container projected-configmap-volume-test: STEP: delete the pod Sep 15 10:59:29.866: INFO: Waiting for pod pod-projected-configmaps-c4ee9cee-50c0-44f5-83ba-a3ea4c4b2afd to disappear Sep 15 10:59:29.911: INFO: Pod pod-projected-configmaps-c4ee9cee-50c0-44f5-83ba-a3ea4c4b2afd no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 10:59:29.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-715" for this suite. 
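The session-affinity test that follows curls the Service's ClusterIP 16 times from an exec pod and passes only if every response comes from the same backend pod. A minimal sketch of that pass condition (`affinity_holds` is a hypothetical helper, not the framework's actual affinity checker):

```python
def affinity_holds(responses):
    """Return True when every non-empty response line names a single
    backend pod -- the condition the ClusterIP session-affinity test
    imposes on its batch of curl responses."""
    # The curl loop in the test emits a blank line (echo) before each
    # response, so ignore empty entries.
    hosts = {h for h in responses if h}
    return len(hosts) == 1

# The test's stdout repeats one hostname 16 times when affinity holds:
sticky = ["affinity-clusterip-timeout-h5tnq"] * 16
# A hypothetical mixed result, as would appear if affinity were broken:
mixed = ["affinity-clusterip-timeout-h5tnq", "affinity-clusterip-timeout-qqqqq"]
```

After the batch check, the test issues single requests spaced 15 seconds apart to verify the stickiness persists until the configured affinity timeout expires.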
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":101,"skipped":1697,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 10:59:29.918: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-398 Sep 15 10:59:34.052: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=services-398 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Sep 15 10:59:34.319: INFO: stderr: "I0915 10:59:34.215216 1437 log.go:181] (0xc000748000) (0xc000d18000) Create stream\nI0915 10:59:34.215288 1437 log.go:181] (0xc000748000) (0xc000d18000) Stream added, broadcasting: 1\nI0915 10:59:34.216850 1437 log.go:181] (0xc000748000) Reply frame received 
for 1\nI0915 10:59:34.216877 1437 log.go:181] (0xc000748000) (0xc0008ce140) Create stream\nI0915 10:59:34.216887 1437 log.go:181] (0xc000748000) (0xc0008ce140) Stream added, broadcasting: 3\nI0915 10:59:34.217727 1437 log.go:181] (0xc000748000) Reply frame received for 3\nI0915 10:59:34.217785 1437 log.go:181] (0xc000748000) (0xc000325040) Create stream\nI0915 10:59:34.217802 1437 log.go:181] (0xc000748000) (0xc000325040) Stream added, broadcasting: 5\nI0915 10:59:34.218635 1437 log.go:181] (0xc000748000) Reply frame received for 5\nI0915 10:59:34.306017 1437 log.go:181] (0xc000748000) Data frame received for 5\nI0915 10:59:34.306042 1437 log.go:181] (0xc000325040) (5) Data frame handling\nI0915 10:59:34.306053 1437 log.go:181] (0xc000325040) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI0915 10:59:34.311969 1437 log.go:181] (0xc000748000) Data frame received for 3\nI0915 10:59:34.311984 1437 log.go:181] (0xc0008ce140) (3) Data frame handling\nI0915 10:59:34.311991 1437 log.go:181] (0xc0008ce140) (3) Data frame sent\nI0915 10:59:34.312650 1437 log.go:181] (0xc000748000) Data frame received for 5\nI0915 10:59:34.312677 1437 log.go:181] (0xc000325040) (5) Data frame handling\nI0915 10:59:34.312703 1437 log.go:181] (0xc000748000) Data frame received for 3\nI0915 10:59:34.312718 1437 log.go:181] (0xc0008ce140) (3) Data frame handling\nI0915 10:59:34.315261 1437 log.go:181] (0xc000748000) Data frame received for 1\nI0915 10:59:34.315275 1437 log.go:181] (0xc000d18000) (1) Data frame handling\nI0915 10:59:34.315288 1437 log.go:181] (0xc000d18000) (1) Data frame sent\nI0915 10:59:34.315297 1437 log.go:181] (0xc000748000) (0xc000d18000) Stream removed, broadcasting: 1\nI0915 10:59:34.315306 1437 log.go:181] (0xc000748000) Go away received\nI0915 10:59:34.315622 1437 log.go:181] (0xc000748000) (0xc000d18000) Stream removed, broadcasting: 1\nI0915 10:59:34.315641 1437 log.go:181] (0xc000748000) (0xc0008ce140) Stream removed, 
broadcasting: 3\nI0915 10:59:34.315650 1437 log.go:181] (0xc000748000) (0xc000325040) Stream removed, broadcasting: 5\n" Sep 15 10:59:34.319: INFO: stdout: "iptables" Sep 15 10:59:34.319: INFO: proxyMode: iptables Sep 15 10:59:34.324: INFO: Waiting for pod kube-proxy-mode-detector to disappear Sep 15 10:59:34.355: INFO: Pod kube-proxy-mode-detector still exists Sep 15 10:59:36.356: INFO: Waiting for pod kube-proxy-mode-detector to disappear Sep 15 10:59:36.360: INFO: Pod kube-proxy-mode-detector still exists Sep 15 10:59:38.356: INFO: Waiting for pod kube-proxy-mode-detector to disappear Sep 15 10:59:38.361: INFO: Pod kube-proxy-mode-detector still exists Sep 15 10:59:40.356: INFO: Waiting for pod kube-proxy-mode-detector to disappear Sep 15 10:59:40.360: INFO: Pod kube-proxy-mode-detector still exists Sep 15 10:59:42.356: INFO: Waiting for pod kube-proxy-mode-detector to disappear Sep 15 10:59:42.360: INFO: Pod kube-proxy-mode-detector still exists Sep 15 10:59:44.356: INFO: Waiting for pod kube-proxy-mode-detector to disappear Sep 15 10:59:44.360: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-clusterip-timeout in namespace services-398 STEP: creating replication controller affinity-clusterip-timeout in namespace services-398 I0915 10:59:44.421523 7 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-398, replica count: 3 I0915 10:59:47.471975 7 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0915 10:59:50.472286 7 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Sep 15 10:59:50.478: INFO: Creating new exec pod Sep 15 10:59:55.496: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec 
--namespace=services-398 execpod-affinityszrzf -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' Sep 15 10:59:55.722: INFO: stderr: "I0915 10:59:55.628295 1455 log.go:181] (0xc000f20f20) (0xc0007b85a0) Create stream\nI0915 10:59:55.628353 1455 log.go:181] (0xc000f20f20) (0xc0007b85a0) Stream added, broadcasting: 1\nI0915 10:59:55.633977 1455 log.go:181] (0xc000f20f20) Reply frame received for 1\nI0915 10:59:55.634022 1455 log.go:181] (0xc000f20f20) (0xc0007b8000) Create stream\nI0915 10:59:55.634033 1455 log.go:181] (0xc000f20f20) (0xc0007b8000) Stream added, broadcasting: 3\nI0915 10:59:55.634966 1455 log.go:181] (0xc000f20f20) Reply frame received for 3\nI0915 10:59:55.635015 1455 log.go:181] (0xc000f20f20) (0xc00055e000) Create stream\nI0915 10:59:55.635039 1455 log.go:181] (0xc000f20f20) (0xc00055e000) Stream added, broadcasting: 5\nI0915 10:59:55.636038 1455 log.go:181] (0xc000f20f20) Reply frame received for 5\nI0915 10:59:55.713224 1455 log.go:181] (0xc000f20f20) Data frame received for 5\nI0915 10:59:55.713267 1455 log.go:181] (0xc00055e000) (5) Data frame handling\nI0915 10:59:55.713303 1455 log.go:181] (0xc00055e000) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip-timeout 80\nI0915 10:59:55.713871 1455 log.go:181] (0xc000f20f20) Data frame received for 5\nI0915 10:59:55.713902 1455 log.go:181] (0xc00055e000) (5) Data frame handling\nI0915 10:59:55.713925 1455 log.go:181] (0xc00055e000) (5) Data frame sent\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\nI0915 10:59:55.714403 1455 log.go:181] (0xc000f20f20) Data frame received for 5\nI0915 10:59:55.714424 1455 log.go:181] (0xc00055e000) (5) Data frame handling\nI0915 10:59:55.714440 1455 log.go:181] (0xc000f20f20) Data frame received for 3\nI0915 10:59:55.714459 1455 log.go:181] (0xc0007b8000) (3) Data frame handling\nI0915 10:59:55.717769 1455 log.go:181] (0xc000f20f20) Data frame received for 1\nI0915 10:59:55.717802 1455 log.go:181] (0xc0007b85a0) (1) Data 
frame handling\nI0915 10:59:55.717817 1455 log.go:181] (0xc0007b85a0) (1) Data frame sent\nI0915 10:59:55.717852 1455 log.go:181] (0xc000f20f20) (0xc0007b85a0) Stream removed, broadcasting: 1\nI0915 10:59:55.717902 1455 log.go:181] (0xc000f20f20) Go away received\nI0915 10:59:55.718439 1455 log.go:181] (0xc000f20f20) (0xc0007b85a0) Stream removed, broadcasting: 1\nI0915 10:59:55.718470 1455 log.go:181] (0xc000f20f20) (0xc0007b8000) Stream removed, broadcasting: 3\nI0915 10:59:55.718483 1455 log.go:181] (0xc000f20f20) (0xc00055e000) Stream removed, broadcasting: 5\n" Sep 15 10:59:55.722: INFO: stdout: "" Sep 15 10:59:55.723: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=services-398 execpod-affinityszrzf -- /bin/sh -x -c nc -zv -t -w 2 10.97.51.88 80' Sep 15 10:59:55.927: INFO: stderr: "I0915 10:59:55.852036 1473 log.go:181] (0xc000028000) (0xc0005b2000) Create stream\nI0915 10:59:55.852106 1473 log.go:181] (0xc000028000) (0xc0005b2000) Stream added, broadcasting: 1\nI0915 10:59:55.854093 1473 log.go:181] (0xc000028000) Reply frame received for 1\nI0915 10:59:55.854151 1473 log.go:181] (0xc000028000) (0xc0009f4780) Create stream\nI0915 10:59:55.854173 1473 log.go:181] (0xc000028000) (0xc0009f4780) Stream added, broadcasting: 3\nI0915 10:59:55.855141 1473 log.go:181] (0xc000028000) Reply frame received for 3\nI0915 10:59:55.855186 1473 log.go:181] (0xc000028000) (0xc000308960) Create stream\nI0915 10:59:55.855196 1473 log.go:181] (0xc000028000) (0xc000308960) Stream added, broadcasting: 5\nI0915 10:59:55.856236 1473 log.go:181] (0xc000028000) Reply frame received for 5\nI0915 10:59:55.921689 1473 log.go:181] (0xc000028000) Data frame received for 3\nI0915 10:59:55.921723 1473 log.go:181] (0xc0009f4780) (3) Data frame handling\nI0915 10:59:55.921742 1473 log.go:181] (0xc000028000) Data frame received for 5\nI0915 10:59:55.921749 1473 log.go:181] (0xc000308960) (5) Data frame handling\nI0915 
10:59:55.921758 1473 log.go:181] (0xc000308960) (5) Data frame sent\nI0915 10:59:55.921765 1473 log.go:181] (0xc000028000) Data frame received for 5\nI0915 10:59:55.921772 1473 log.go:181] (0xc000308960) (5) Data frame handling\n+ nc -zv -t -w 2 10.97.51.88 80\nConnection to 10.97.51.88 80 port [tcp/http] succeeded!\nI0915 10:59:55.922891 1473 log.go:181] (0xc000028000) Data frame received for 1\nI0915 10:59:55.922904 1473 log.go:181] (0xc0005b2000) (1) Data frame handling\nI0915 10:59:55.922913 1473 log.go:181] (0xc0005b2000) (1) Data frame sent\nI0915 10:59:55.923017 1473 log.go:181] (0xc000028000) (0xc0005b2000) Stream removed, broadcasting: 1\nI0915 10:59:55.923045 1473 log.go:181] (0xc000028000) Go away received\nI0915 10:59:55.923370 1473 log.go:181] (0xc000028000) (0xc0005b2000) Stream removed, broadcasting: 1\nI0915 10:59:55.923394 1473 log.go:181] (0xc000028000) (0xc0009f4780) Stream removed, broadcasting: 3\nI0915 10:59:55.923411 1473 log.go:181] (0xc000028000) (0xc000308960) Stream removed, broadcasting: 5\n" Sep 15 10:59:55.927: INFO: stdout: "" Sep 15 10:59:55.927: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=services-398 execpod-affinityszrzf -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.97.51.88:80/ ; done' Sep 15 10:59:56.232: INFO: stderr: "I0915 10:59:56.048333 1491 log.go:181] (0xc000b58e70) (0xc0005c6140) Create stream\nI0915 10:59:56.048397 1491 log.go:181] (0xc000b58e70) (0xc0005c6140) Stream added, broadcasting: 1\nI0915 10:59:56.053659 1491 log.go:181] (0xc000b58e70) Reply frame received for 1\nI0915 10:59:56.053700 1491 log.go:181] (0xc000b58e70) (0xc0005c6dc0) Create stream\nI0915 10:59:56.053712 1491 log.go:181] (0xc000b58e70) (0xc0005c6dc0) Stream added, broadcasting: 3\nI0915 10:59:56.054513 1491 log.go:181] (0xc000b58e70) Reply frame received for 3\nI0915 10:59:56.054536 1491 log.go:181] (0xc000b58e70) (0xc000b2a5a0) 
Create stream\nI0915 10:59:56.054545 1491 log.go:181] (0xc000b58e70) (0xc000b2a5a0) Stream added, broadcasting: 5\nI0915 10:59:56.055398 1491 log.go:181] (0xc000b58e70) Reply frame received for 5\nI0915 10:59:56.118797 1491 log.go:181] (0xc000b58e70) Data frame received for 3\nI0915 10:59:56.118841 1491 log.go:181] (0xc0005c6dc0) (3) Data frame handling\nI0915 10:59:56.118872 1491 log.go:181] (0xc0005c6dc0) (3) Data frame sent\nI0915 10:59:56.118931 1491 log.go:181] (0xc000b58e70) Data frame received for 5\nI0915 10:59:56.118962 1491 log.go:181] (0xc000b2a5a0) (5) Data frame handling\nI0915 10:59:56.118981 1491 log.go:181] (0xc000b2a5a0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.51.88:80/\nI0915 10:59:56.126499 1491 log.go:181] (0xc000b58e70) Data frame received for 3\nI0915 10:59:56.126521 1491 log.go:181] (0xc0005c6dc0) (3) Data frame handling\nI0915 10:59:56.126533 1491 log.go:181] (0xc0005c6dc0) (3) Data frame sent\nI0915 10:59:56.126942 1491 log.go:181] (0xc000b58e70) Data frame received for 3\nI0915 10:59:56.126972 1491 log.go:181] (0xc0005c6dc0) (3) Data frame handling\nI0915 10:59:56.126986 1491 log.go:181] (0xc0005c6dc0) (3) Data frame sent\nI0915 10:59:56.127000 1491 log.go:181] (0xc000b58e70) Data frame received for 5\nI0915 10:59:56.127012 1491 log.go:181] (0xc000b2a5a0) (5) Data frame handling\nI0915 10:59:56.127030 1491 log.go:181] (0xc000b2a5a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.51.88:80/\nI0915 10:59:56.134854 1491 log.go:181] (0xc000b58e70) Data frame received for 3\nI0915 10:59:56.134885 1491 log.go:181] (0xc0005c6dc0) (3) Data frame handling\nI0915 10:59:56.134913 1491 log.go:181] (0xc0005c6dc0) (3) Data frame sent\nI0915 10:59:56.135830 1491 log.go:181] (0xc000b58e70) Data frame received for 5\nI0915 10:59:56.135846 1491 log.go:181] (0xc000b2a5a0) (5) Data frame handling\nI0915 10:59:56.135859 1491 log.go:181] (0xc000b2a5a0) (5) Data frame sent\n+ echo\n+ curl 
-q -s --connect-timeout 2 http://10.97.51.88:80/\nI0915 10:59:56.135905 1491 log.go:181] (0xc000b58e70) Data frame received for 3\nI0915 10:59:56.135933 1491 log.go:181] (0xc0005c6dc0) (3) Data frame handling\nI0915 10:59:56.135950 1491 log.go:181] (0xc0005c6dc0) (3) Data frame sent\nI0915 10:59:56.139140 1491 log.go:181] (0xc000b58e70) Data frame received for 3\nI0915 10:59:56.139152 1491 log.go:181] (0xc0005c6dc0) (3) Data frame handling\nI0915 10:59:56.139158 1491 log.go:181] (0xc0005c6dc0) (3) Data frame sent\nI0915 10:59:56.139558 1491 log.go:181] (0xc000b58e70) Data frame received for 3\nI0915 10:59:56.139570 1491 log.go:181] (0xc0005c6dc0) (3) Data frame handling\nI0915 10:59:56.139576 1491 log.go:181] (0xc0005c6dc0) (3) Data frame sent\nI0915 10:59:56.139583 1491 log.go:181] (0xc000b58e70) Data frame received for 5\nI0915 10:59:56.139588 1491 log.go:181] (0xc000b2a5a0) (5) Data frame handling\nI0915 10:59:56.139597 1491 log.go:181] (0xc000b2a5a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.51.88:80/\nI0915 10:59:56.147169 1491 log.go:181] (0xc000b58e70) Data frame received for 3\nI0915 10:59:56.147192 1491 log.go:181] (0xc0005c6dc0) (3) Data frame handling\nI0915 10:59:56.147212 1491 log.go:181] (0xc0005c6dc0) (3) Data frame sent\nI0915 10:59:56.147695 1491 log.go:181] (0xc000b58e70) Data frame received for 5\nI0915 10:59:56.147714 1491 log.go:181] (0xc000b2a5a0) (5) Data frame handling\nI0915 10:59:56.147728 1491 log.go:181] (0xc000b2a5a0) (5) Data frame sent\nI0915 10:59:56.147739 1491 log.go:181] (0xc000b58e70) Data frame received for 5\nI0915 10:59:56.147748 1491 log.go:181] (0xc000b2a5a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.51.88:80/\nI0915 10:59:56.147792 1491 log.go:181] (0xc000b2a5a0) (5) Data frame sent\nI0915 10:59:56.147834 1491 log.go:181] (0xc000b58e70) Data frame received for 3\nI0915 10:59:56.147848 1491 log.go:181] (0xc0005c6dc0) (3) Data frame handling\nI0915 
10:59:56.147855 1491 log.go:181] (0xc0005c6dc0) (3) Data frame sent\nI0915 10:59:56.152116 1491 log.go:181] (0xc000b58e70) Data frame received for 3\nI0915 10:59:56.152130 1491 log.go:181] (0xc0005c6dc0) (3) Data frame handling\nI0915 10:59:56.152232 1491 log.go:181] (0xc0005c6dc0) (3) Data frame sent\nI0915 10:59:56.152802 1491 log.go:181] (0xc000b58e70) Data frame received for 3\nI0915 10:59:56.152829 1491 log.go:181] (0xc000b58e70) Data frame received for 5\nI0915 10:59:56.152851 1491 log.go:181] (0xc000b2a5a0) (5) Data frame handling\nI0915 10:59:56.152863 1491 log.go:181] (0xc000b2a5a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.51.88:80/\nI0915 10:59:56.152877 1491 log.go:181] (0xc0005c6dc0) (3) Data frame handling\nI0915 10:59:56.152888 1491 log.go:181] (0xc0005c6dc0) (3) Data frame sent\nI0915 10:59:56.160197 1491 log.go:181] (0xc000b58e70) Data frame received for 3\nI0915 10:59:56.160212 1491 log.go:181] (0xc0005c6dc0) (3) Data frame handling\nI0915 10:59:56.160225 1491 log.go:181] (0xc0005c6dc0) (3) Data frame sent\nI0915 10:59:56.160993 1491 log.go:181] (0xc000b58e70) Data frame received for 5\nI0915 10:59:56.161005 1491 log.go:181] (0xc000b2a5a0) (5) Data frame handling\nI0915 10:59:56.161011 1491 log.go:181] (0xc000b2a5a0) (5) Data frame sent\nI0915 10:59:56.161015 1491 log.go:181] (0xc000b58e70) Data frame received for 5\nI0915 10:59:56.161019 1491 log.go:181] (0xc000b2a5a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.51.88:80/\nI0915 10:59:56.161032 1491 log.go:181] (0xc000b2a5a0) (5) Data frame sent\nI0915 10:59:56.161119 1491 log.go:181] (0xc000b58e70) Data frame received for 3\nI0915 10:59:56.161128 1491 log.go:181] (0xc0005c6dc0) (3) Data frame handling\nI0915 10:59:56.161133 1491 log.go:181] (0xc0005c6dc0) (3) Data frame sent\nI0915 10:59:56.165956 1491 log.go:181] (0xc000b58e70) Data frame received for 3\nI0915 10:59:56.165986 1491 log.go:181] (0xc0005c6dc0) (3) Data frame 
handling\nI0915 10:59:56.166014 1491 log.go:181] (0xc0005c6dc0) (3) Data frame sent\nI0915 10:59:56.166773 1491 log.go:181] (0xc000b58e70) Data frame received for 5\nI0915 10:59:56.166809 1491 log.go:181] (0xc000b2a5a0) (5) Data frame handling\nI0915 10:59:56.166831 1491 log.go:181] (0xc000b2a5a0) (5) Data frame sent\nI0915 10:59:56.166848 1491 log.go:181] (0xc000b58e70) Data frame received for 5\nI0915 10:59:56.166865 1491 log.go:181] (0xc000b2a5a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.51.88:80/\nI0915 10:59:56.166891 1491 log.go:181] (0xc000b2a5a0) (5) Data frame sent\nI0915 10:59:56.166910 1491 log.go:181] (0xc000b58e70) Data frame received for 3\nI0915 10:59:56.166927 1491 log.go:181] (0xc0005c6dc0) (3) Data frame handling\nI0915 10:59:56.166944 1491 log.go:181] (0xc0005c6dc0) (3) Data frame sent\nI0915 10:59:56.170737 1491 log.go:181] (0xc000b58e70) Data frame received for 3\nI0915 10:59:56.170758 1491 log.go:181] (0xc0005c6dc0) (3) Data frame handling\nI0915 10:59:56.170772 1491 log.go:181] (0xc0005c6dc0) (3) Data frame sent\nI0915 10:59:56.171586 1491 log.go:181] (0xc000b58e70) Data frame received for 3\nI0915 10:59:56.171618 1491 log.go:181] (0xc0005c6dc0) (3) Data frame handling\nI0915 10:59:56.171632 1491 log.go:181] (0xc0005c6dc0) (3) Data frame sent\nI0915 10:59:56.171651 1491 log.go:181] (0xc000b58e70) Data frame received for 5\nI0915 10:59:56.171662 1491 log.go:181] (0xc000b2a5a0) (5) Data frame handling\nI0915 10:59:56.171674 1491 log.go:181] (0xc000b2a5a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.51.88:80/\nI0915 10:59:56.175223 1491 log.go:181] (0xc000b58e70) Data frame received for 3\nI0915 10:59:56.175253 1491 log.go:181] (0xc0005c6dc0) (3) Data frame handling\nI0915 10:59:56.175268 1491 log.go:181] (0xc0005c6dc0) (3) Data frame sent\nI0915 10:59:56.175697 1491 log.go:181] (0xc000b58e70) Data frame received for 3\nI0915 10:59:56.175717 1491 log.go:181] (0xc0005c6dc0) (3) 
Data frame handling\nI0915 10:59:56.175733 1491 log.go:181] (0xc0005c6dc0) (3) Data frame sent\nI0915 10:59:56.175755 1491 log.go:181] (0xc000b58e70) Data frame received for 5\nI0915 10:59:56.175787 1491 log.go:181] (0xc000b2a5a0) (5) Data frame handling\nI0915 10:59:56.175811 1491 log.go:181] (0xc000b2a5a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.51.88:80/\nI0915 10:59:56.181226 1491 log.go:181] (0xc000b58e70) Data frame received for 3\nI0915 10:59:56.181246 1491 log.go:181] (0xc0005c6dc0) (3) Data frame handling\nI0915 10:59:56.181277 1491 log.go:181] (0xc0005c6dc0) (3) Data frame sent\nI0915 10:59:56.181980 1491 log.go:181] (0xc000b58e70) Data frame received for 3\nI0915 10:59:56.182009 1491 log.go:181] (0xc0005c6dc0) (3) Data frame handling\nI0915 10:59:56.182024 1491 log.go:181] (0xc0005c6dc0) (3) Data frame sent\nI0915 10:59:56.182046 1491 log.go:181] (0xc000b58e70) Data frame received for 5\nI0915 10:59:56.182060 1491 log.go:181] (0xc000b2a5a0) (5) Data frame handling\nI0915 10:59:56.182072 1491 log.go:181] (0xc000b2a5a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.51.88:80/\nI0915 10:59:56.186328 1491 log.go:181] (0xc000b58e70) Data frame received for 3\nI0915 10:59:56.186346 1491 log.go:181] (0xc0005c6dc0) (3) Data frame handling\nI0915 10:59:56.186357 1491 log.go:181] (0xc0005c6dc0) (3) Data frame sent\nI0915 10:59:56.186981 1491 log.go:181] (0xc000b58e70) Data frame received for 3\nI0915 10:59:56.187009 1491 log.go:181] (0xc0005c6dc0) (3) Data frame handling\nI0915 10:59:56.187023 1491 log.go:181] (0xc0005c6dc0) (3) Data frame sent\nI0915 10:59:56.187046 1491 log.go:181] (0xc000b58e70) Data frame received for 5\nI0915 10:59:56.187058 1491 log.go:181] (0xc000b2a5a0) (5) Data frame handling\nI0915 10:59:56.187071 1491 log.go:181] (0xc000b2a5a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.51.88:80/\nI0915 10:59:56.194405 1491 log.go:181] (0xc000b58e70) Data 
frame received for 3\nI0915 10:59:56.194427 1491 log.go:181] (0xc0005c6dc0) (3) Data frame handling\nI0915 10:59:56.194443 1491 log.go:181] (0xc0005c6dc0) (3) Data frame sent\nI0915 10:59:56.195054 1491 log.go:181] (0xc000b58e70) Data frame received for 5\nI0915 10:59:56.195083 1491 log.go:181] (0xc000b2a5a0) (5) Data frame handling\nI0915 10:59:56.195126 1491 log.go:181] (0xc000b2a5a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.51.88:80/\nI0915 10:59:56.195156 1491 log.go:181] (0xc000b58e70) Data frame received for 3\nI0915 10:59:56.195177 1491 log.go:181] (0xc0005c6dc0) (3) Data frame handling\nI0915 10:59:56.195190 1491 log.go:181] (0xc0005c6dc0) (3) Data frame sent\nI0915 10:59:56.201242 1491 log.go:181] (0xc000b58e70) Data frame received for 3\nI0915 10:59:56.201257 1491 log.go:181] (0xc0005c6dc0) (3) Data frame handling\nI0915 10:59:56.201271 1491 log.go:181] (0xc0005c6dc0) (3) Data frame sent\nI0915 10:59:56.202032 1491 log.go:181] (0xc000b58e70) Data frame received for 3\nI0915 10:59:56.202070 1491 log.go:181] (0xc0005c6dc0) (3) Data frame handling\nI0915 10:59:56.202114 1491 log.go:181] (0xc0005c6dc0) (3) Data frame sent\nI0915 10:59:56.202150 1491 log.go:181] (0xc000b58e70) Data frame received for 5\nI0915 10:59:56.202174 1491 log.go:181] (0xc000b2a5a0) (5) Data frame handling\nI0915 10:59:56.202190 1491 log.go:181] (0xc000b2a5a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.51.88:80/\nI0915 10:59:56.209499 1491 log.go:181] (0xc000b58e70) Data frame received for 3\nI0915 10:59:56.209521 1491 log.go:181] (0xc0005c6dc0) (3) Data frame handling\nI0915 10:59:56.209536 1491 log.go:181] (0xc0005c6dc0) (3) Data frame sent\nI0915 10:59:56.210496 1491 log.go:181] (0xc000b58e70) Data frame received for 3\nI0915 10:59:56.210528 1491 log.go:181] (0xc0005c6dc0) (3) Data frame handling\nI0915 10:59:56.210542 1491 log.go:181] (0xc0005c6dc0) (3) Data frame sent\nI0915 10:59:56.210578 1491 log.go:181] 
(0xc000b58e70) Data frame received for 5\nI0915 10:59:56.210584 1491 log.go:181] (0xc000b2a5a0) (5) Data frame handling\nI0915 10:59:56.210591 1491 log.go:181] (0xc000b2a5a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.51.88:80/\nI0915 10:59:56.214783 1491 log.go:181] (0xc000b58e70) Data frame received for 3\nI0915 10:59:56.214815 1491 log.go:181] (0xc0005c6dc0) (3) Data frame handling\nI0915 10:59:56.214861 1491 log.go:181] (0xc0005c6dc0) (3) Data frame sent\nI0915 10:59:56.215275 1491 log.go:181] (0xc000b58e70) Data frame received for 5\nI0915 10:59:56.215292 1491 log.go:181] (0xc000b2a5a0) (5) Data frame handling\nI0915 10:59:56.215298 1491 log.go:181] (0xc000b2a5a0) (5) Data frame sent\n+ echo\n+ curl -q -sI0915 10:59:56.215308 1491 log.go:181] (0xc000b58e70) Data frame received for 5\nI0915 10:59:56.215338 1491 log.go:181] (0xc000b2a5a0) (5) Data frame handling\nI0915 10:59:56.215349 1491 log.go:181] (0xc000b2a5a0) (5) Data frame sent\n --connect-timeout 2 http://10.97.51.88:80/\nI0915 10:59:56.215461 1491 log.go:181] (0xc000b58e70) Data frame received for 3\nI0915 10:59:56.215482 1491 log.go:181] (0xc0005c6dc0) (3) Data frame handling\nI0915 10:59:56.215497 1491 log.go:181] (0xc0005c6dc0) (3) Data frame sent\nI0915 10:59:56.222076 1491 log.go:181] (0xc000b58e70) Data frame received for 3\nI0915 10:59:56.222093 1491 log.go:181] (0xc0005c6dc0) (3) Data frame handling\nI0915 10:59:56.222101 1491 log.go:181] (0xc0005c6dc0) (3) Data frame sent\nI0915 10:59:56.222578 1491 log.go:181] (0xc000b58e70) Data frame received for 3\nI0915 10:59:56.222595 1491 log.go:181] (0xc0005c6dc0) (3) Data frame handling\nI0915 10:59:56.222621 1491 log.go:181] (0xc000b58e70) Data frame received for 5\nI0915 10:59:56.222641 1491 log.go:181] (0xc000b2a5a0) (5) Data frame handling\nI0915 10:59:56.224482 1491 log.go:181] (0xc000b58e70) Data frame received for 1\nI0915 10:59:56.224504 1491 log.go:181] (0xc0005c6140) (1) Data frame handling\nI0915 
10:59:56.224521 1491 log.go:181] (0xc0005c6140) (1) Data frame sent\nI0915 10:59:56.227229 1491 log.go:181] (0xc000b58e70) (0xc0005c6140) Stream removed, broadcasting: 1\nI0915 10:59:56.227296 1491 log.go:181] (0xc000b58e70) Go away received\nI0915 10:59:56.227646 1491 log.go:181] (0xc000b58e70) (0xc0005c6140) Stream removed, broadcasting: 1\nI0915 10:59:56.227659 1491 log.go:181] (0xc000b58e70) (0xc0005c6dc0) Stream removed, broadcasting: 3\nI0915 10:59:56.227664 1491 log.go:181] (0xc000b58e70) (0xc000b2a5a0) Stream removed, broadcasting: 5\n" Sep 15 10:59:56.232: INFO: stdout: "\naffinity-clusterip-timeout-h5tnq\naffinity-clusterip-timeout-h5tnq\naffinity-clusterip-timeout-h5tnq\naffinity-clusterip-timeout-h5tnq\naffinity-clusterip-timeout-h5tnq\naffinity-clusterip-timeout-h5tnq\naffinity-clusterip-timeout-h5tnq\naffinity-clusterip-timeout-h5tnq\naffinity-clusterip-timeout-h5tnq\naffinity-clusterip-timeout-h5tnq\naffinity-clusterip-timeout-h5tnq\naffinity-clusterip-timeout-h5tnq\naffinity-clusterip-timeout-h5tnq\naffinity-clusterip-timeout-h5tnq\naffinity-clusterip-timeout-h5tnq\naffinity-clusterip-timeout-h5tnq" Sep 15 10:59:56.232: INFO: Received response from host: affinity-clusterip-timeout-h5tnq Sep 15 10:59:56.232: INFO: Received response from host: affinity-clusterip-timeout-h5tnq Sep 15 10:59:56.232: INFO: Received response from host: affinity-clusterip-timeout-h5tnq Sep 15 10:59:56.232: INFO: Received response from host: affinity-clusterip-timeout-h5tnq Sep 15 10:59:56.232: INFO: Received response from host: affinity-clusterip-timeout-h5tnq Sep 15 10:59:56.232: INFO: Received response from host: affinity-clusterip-timeout-h5tnq Sep 15 10:59:56.232: INFO: Received response from host: affinity-clusterip-timeout-h5tnq Sep 15 10:59:56.232: INFO: Received response from host: affinity-clusterip-timeout-h5tnq Sep 15 10:59:56.232: INFO: Received response from host: affinity-clusterip-timeout-h5tnq Sep 15 10:59:56.232: INFO: Received response from host: 
affinity-clusterip-timeout-h5tnq Sep 15 10:59:56.232: INFO: Received response from host: affinity-clusterip-timeout-h5tnq Sep 15 10:59:56.232: INFO: Received response from host: affinity-clusterip-timeout-h5tnq Sep 15 10:59:56.232: INFO: Received response from host: affinity-clusterip-timeout-h5tnq Sep 15 10:59:56.232: INFO: Received response from host: affinity-clusterip-timeout-h5tnq Sep 15 10:59:56.232: INFO: Received response from host: affinity-clusterip-timeout-h5tnq Sep 15 10:59:56.232: INFO: Received response from host: affinity-clusterip-timeout-h5tnq Sep 15 10:59:56.232: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=services-398 execpod-affinityszrzf -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.97.51.88:80/' Sep 15 10:59:56.463: INFO: stderr: "I0915 10:59:56.372535 1509 log.go:181] (0xc000850f20) (0xc0004e0e60) Create stream\nI0915 10:59:56.372599 1509 log.go:181] (0xc000850f20) (0xc0004e0e60) Stream added, broadcasting: 1\nI0915 10:59:56.378818 1509 log.go:181] (0xc000850f20) Reply frame received for 1\nI0915 10:59:56.378862 1509 log.go:181] (0xc000850f20) (0xc000b840a0) Create stream\nI0915 10:59:56.378873 1509 log.go:181] (0xc000850f20) (0xc000b840a0) Stream added, broadcasting: 3\nI0915 10:59:56.379882 1509 log.go:181] (0xc000850f20) Reply frame received for 3\nI0915 10:59:56.379929 1509 log.go:181] (0xc000850f20) (0xc000616aa0) Create stream\nI0915 10:59:56.379950 1509 log.go:181] (0xc000850f20) (0xc000616aa0) Stream added, broadcasting: 5\nI0915 10:59:56.380823 1509 log.go:181] (0xc000850f20) Reply frame received for 5\nI0915 10:59:56.450620 1509 log.go:181] (0xc000850f20) Data frame received for 5\nI0915 10:59:56.450654 1509 log.go:181] (0xc000616aa0) (5) Data frame handling\nI0915 10:59:56.450674 1509 log.go:181] (0xc000616aa0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.97.51.88:80/\nI0915 10:59:56.456000 1509 log.go:181] (0xc000850f20) 
Data frame received for 3\nI0915 10:59:56.456015 1509 log.go:181] (0xc000b840a0) (3) Data frame handling\nI0915 10:59:56.456026 1509 log.go:181] (0xc000b840a0) (3) Data frame sent\nI0915 10:59:56.456882 1509 log.go:181] (0xc000850f20) Data frame received for 3\nI0915 10:59:56.456929 1509 log.go:181] (0xc000b840a0) (3) Data frame handling\nI0915 10:59:56.457070 1509 log.go:181] (0xc000850f20) Data frame received for 5\nI0915 10:59:56.457105 1509 log.go:181] (0xc000616aa0) (5) Data frame handling\nI0915 10:59:56.458732 1509 log.go:181] (0xc000850f20) Data frame received for 1\nI0915 10:59:56.458744 1509 log.go:181] (0xc0004e0e60) (1) Data frame handling\nI0915 10:59:56.458754 1509 log.go:181] (0xc0004e0e60) (1) Data frame sent\nI0915 10:59:56.458827 1509 log.go:181] (0xc000850f20) (0xc0004e0e60) Stream removed, broadcasting: 1\nI0915 10:59:56.458953 1509 log.go:181] (0xc000850f20) Go away received\nI0915 10:59:56.459099 1509 log.go:181] (0xc000850f20) (0xc0004e0e60) Stream removed, broadcasting: 1\nI0915 10:59:56.459110 1509 log.go:181] (0xc000850f20) (0xc000b840a0) Stream removed, broadcasting: 3\nI0915 10:59:56.459116 1509 log.go:181] (0xc000850f20) (0xc000616aa0) Stream removed, broadcasting: 5\n" Sep 15 10:59:56.463: INFO: stdout: "affinity-clusterip-timeout-h5tnq" Sep 15 11:00:11.463: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=services-398 execpod-affinityszrzf -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.97.51.88:80/' Sep 15 11:00:11.715: INFO: stderr: "I0915 11:00:11.596315 1527 log.go:181] (0xc000749600) (0xc0006ec8c0) Create stream\nI0915 11:00:11.596369 1527 log.go:181] (0xc000749600) (0xc0006ec8c0) Stream added, broadcasting: 1\nI0915 11:00:11.600708 1527 log.go:181] (0xc000749600) Reply frame received for 1\nI0915 11:00:11.600844 1527 log.go:181] (0xc000749600) (0xc0006ec000) Create stream\nI0915 11:00:11.600881 1527 log.go:181] (0xc000749600) (0xc0006ec000) 
Stream added, broadcasting: 3\nI0915 11:00:11.601821 1527 log.go:181] (0xc000749600) Reply frame received for 3\nI0915 11:00:11.601875 1527 log.go:181] (0xc000749600) (0xc000873ea0) Create stream\nI0915 11:00:11.601897 1527 log.go:181] (0xc000749600) (0xc000873ea0) Stream added, broadcasting: 5\nI0915 11:00:11.602873 1527 log.go:181] (0xc000749600) Reply frame received for 5\nI0915 11:00:11.703606 1527 log.go:181] (0xc000749600) Data frame received for 5\nI0915 11:00:11.703661 1527 log.go:181] (0xc000873ea0) (5) Data frame handling\nI0915 11:00:11.703702 1527 log.go:181] (0xc000873ea0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.97.51.88:80/\nI0915 11:00:11.707479 1527 log.go:181] (0xc000749600) Data frame received for 3\nI0915 11:00:11.707504 1527 log.go:181] (0xc0006ec000) (3) Data frame handling\nI0915 11:00:11.707513 1527 log.go:181] (0xc0006ec000) (3) Data frame sent\nI0915 11:00:11.708498 1527 log.go:181] (0xc000749600) Data frame received for 3\nI0915 11:00:11.708544 1527 log.go:181] (0xc0006ec000) (3) Data frame handling\nI0915 11:00:11.708580 1527 log.go:181] (0xc000749600) Data frame received for 5\nI0915 11:00:11.708605 1527 log.go:181] (0xc000873ea0) (5) Data frame handling\nI0915 11:00:11.710144 1527 log.go:181] (0xc000749600) Data frame received for 1\nI0915 11:00:11.710174 1527 log.go:181] (0xc0006ec8c0) (1) Data frame handling\nI0915 11:00:11.710203 1527 log.go:181] (0xc0006ec8c0) (1) Data frame sent\nI0915 11:00:11.710222 1527 log.go:181] (0xc000749600) (0xc0006ec8c0) Stream removed, broadcasting: 1\nI0915 11:00:11.710240 1527 log.go:181] (0xc000749600) Go away received\nI0915 11:00:11.710848 1527 log.go:181] (0xc000749600) (0xc0006ec8c0) Stream removed, broadcasting: 1\nI0915 11:00:11.710873 1527 log.go:181] (0xc000749600) (0xc0006ec000) Stream removed, broadcasting: 3\nI0915 11:00:11.710885 1527 log.go:181] (0xc000749600) (0xc000873ea0) Stream removed, broadcasting: 5\n" Sep 15 11:00:11.715: INFO: stdout: 
"affinity-clusterip-timeout-h5tnq" Sep 15 11:00:26.715: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=services-398 execpod-affinityszrzf -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.97.51.88:80/' Sep 15 11:00:26.940: INFO: stderr: "I0915 11:00:26.859357 1546 log.go:181] (0xc00029e4d0) (0xc000848fa0) Create stream\nI0915 11:00:26.859407 1546 log.go:181] (0xc00029e4d0) (0xc000848fa0) Stream added, broadcasting: 1\nI0915 11:00:26.861858 1546 log.go:181] (0xc00029e4d0) Reply frame received for 1\nI0915 11:00:26.861904 1546 log.go:181] (0xc00029e4d0) (0xc0001fa140) Create stream\nI0915 11:00:26.861919 1546 log.go:181] (0xc00029e4d0) (0xc0001fa140) Stream added, broadcasting: 3\nI0915 11:00:26.862713 1546 log.go:181] (0xc00029e4d0) Reply frame received for 3\nI0915 11:00:26.862751 1546 log.go:181] (0xc00029e4d0) (0xc000e0e8c0) Create stream\nI0915 11:00:26.862767 1546 log.go:181] (0xc00029e4d0) (0xc000e0e8c0) Stream added, broadcasting: 5\nI0915 11:00:26.863563 1546 log.go:181] (0xc00029e4d0) Reply frame received for 5\nI0915 11:00:26.929769 1546 log.go:181] (0xc00029e4d0) Data frame received for 5\nI0915 11:00:26.929797 1546 log.go:181] (0xc000e0e8c0) (5) Data frame handling\nI0915 11:00:26.929818 1546 log.go:181] (0xc000e0e8c0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.97.51.88:80/\nI0915 11:00:26.931677 1546 log.go:181] (0xc00029e4d0) Data frame received for 3\nI0915 11:00:26.931711 1546 log.go:181] (0xc0001fa140) (3) Data frame handling\nI0915 11:00:26.931738 1546 log.go:181] (0xc0001fa140) (3) Data frame sent\nI0915 11:00:26.932294 1546 log.go:181] (0xc00029e4d0) Data frame received for 5\nI0915 11:00:26.932331 1546 log.go:181] (0xc000e0e8c0) (5) Data frame handling\nI0915 11:00:26.932358 1546 log.go:181] (0xc00029e4d0) Data frame received for 3\nI0915 11:00:26.932380 1546 log.go:181] (0xc0001fa140) (3) Data frame handling\nI0915 11:00:26.934228 1546 
log.go:181] (0xc00029e4d0) Data frame received for 1\nI0915 11:00:26.934248 1546 log.go:181] (0xc000848fa0) (1) Data frame handling\nI0915 11:00:26.934257 1546 log.go:181] (0xc000848fa0) (1) Data frame sent\nI0915 11:00:26.934270 1546 log.go:181] (0xc00029e4d0) (0xc000848fa0) Stream removed, broadcasting: 1\nI0915 11:00:26.934329 1546 log.go:181] (0xc00029e4d0) Go away received\nI0915 11:00:26.934887 1546 log.go:181] (0xc00029e4d0) (0xc000848fa0) Stream removed, broadcasting: 1\nI0915 11:00:26.934923 1546 log.go:181] (0xc00029e4d0) (0xc0001fa140) Stream removed, broadcasting: 3\nI0915 11:00:26.934941 1546 log.go:181] (0xc00029e4d0) (0xc000e0e8c0) Stream removed, broadcasting: 5\n" Sep 15 11:00:26.940: INFO: stdout: "affinity-clusterip-timeout-m2j7c" Sep 15 11:00:26.940: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-398, will wait for the garbage collector to delete the pods Sep 15 11:00:27.022: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 5.680774ms Sep 15 11:00:27.623: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 600.180106ms [AfterEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:00:43.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-398" for this suite. 
[AfterEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:73.354 seconds] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":303,"completed":102,"skipped":1708,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:00:43.273: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-879.svc.cluster.local)" && echo OK > 
/results/wheezy_hosts@dns-querier-1.dns-test-service.dns-879.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-879.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-879.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-879.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-879.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Sep 15 11:00:59.461: INFO: DNS probes using dns-879/dns-test-1503c1cd-054e-4d52-b1fe-b90d1e590141 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:00:59.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-879" for this suite. 
• [SLOW TEST:16.306 seconds] [sig-network] DNS /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":303,"completed":103,"skipped":1720,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:00:59.580: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io 
discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:01:00.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-4480" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":303,"completed":104,"skipped":1749,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:01:00.258: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Kubectl replace 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1581 [It] should update a single-container pod's image [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine Sep 15 11:01:00.327: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-350' Sep 15 11:01:00.434: INFO: stderr: "" Sep 15 11:01:00.434: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Sep 15 11:01:05.485: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-350 -o json' Sep 15 11:01:05.585: INFO: stderr: "" Sep 15 11:01:05.585: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-09-15T11:01:00Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {}\n }\n },\n \"manager\": 
\"kubectl-run\",\n \"operation\": \"Update\",\n \"time\": \"2020-09-15T11:01:00Z\"\n },\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:phase\": {},\n \"f:podIP\": {},\n \"f:podIPs\": {\n \".\": {},\n \"k:{\\\"ip\\\":\\\"10.244.2.104\\\"}\": {\n \".\": {},\n \"f:ip\": {}\n }\n },\n \"f:startTime\": {}\n }\n },\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"time\": \"2020-09-15T11:01:03Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-350\",\n \"resourceVersion\": \"441457\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-350/pods/e2e-test-httpd-pod\",\n \"uid\": \"ec213cc9-2e73-49f5-8387-bae37b5c77d9\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-fnkn6\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"kali-worker2\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n 
\"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-fnkn6\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-fnkn6\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-09-15T11:01:00Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-09-15T11:01:03Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-09-15T11:01:03Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-09-15T11:01:00Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://655f07584fd7314ffe59144df99aab0332549b22fb8a4e28335f01728f19cf75\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-09-15T11:01:03Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.18.0.12\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.104\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.2.104\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-09-15T11:01:00Z\"\n }\n}\n" STEP: replace the image in the pod Sep 15 11:01:05.586: INFO: Running 
'/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-350' Sep 15 11:01:05.917: INFO: stderr: "" Sep 15 11:01:05.917: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1586 Sep 15 11:01:05.920: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-350' Sep 15 11:01:13.213: INFO: stderr: "" Sep 15 11:01:13.213: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:01:13.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-350" for this suite. 
• [SLOW TEST:12.966 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1577 should update a single-container pod's image [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":303,"completed":105,"skipped":1771,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:01:13.225: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on node default medium Sep 15 11:01:13.340: INFO: Waiting up to 5m0s for pod "pod-c1d83f87-863a-4ad9-9c57-f9d8c2a13b96" in namespace "emptydir-7594" to be "Succeeded or Failed" Sep 15 11:01:13.346: INFO: Pod 
"pod-c1d83f87-863a-4ad9-9c57-f9d8c2a13b96": Phase="Pending", Reason="", readiness=false. Elapsed: 6.000921ms Sep 15 11:01:15.351: INFO: Pod "pod-c1d83f87-863a-4ad9-9c57-f9d8c2a13b96": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01118963s Sep 15 11:01:17.356: INFO: Pod "pod-c1d83f87-863a-4ad9-9c57-f9d8c2a13b96": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01590171s STEP: Saw pod success Sep 15 11:01:17.356: INFO: Pod "pod-c1d83f87-863a-4ad9-9c57-f9d8c2a13b96" satisfied condition "Succeeded or Failed" Sep 15 11:01:17.358: INFO: Trying to get logs from node kali-worker2 pod pod-c1d83f87-863a-4ad9-9c57-f9d8c2a13b96 container test-container: STEP: delete the pod Sep 15 11:01:17.412: INFO: Waiting for pod pod-c1d83f87-863a-4ad9-9c57-f9d8c2a13b96 to disappear Sep 15 11:01:17.419: INFO: Pod pod-c1d83f87-863a-4ad9-9c57-f9d8c2a13b96 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:01:17.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7594" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":106,"skipped":1787,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:01:17.427: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 15 11:01:18.196: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 15 11:01:20.469: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735764478, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735764478, loc:(*time.Location)(0x7702840)}}, 
Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735764478, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735764478, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 15 11:01:23.502: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 15 11:01:23.507: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:01:24.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6056" for this suite. STEP: Destroying namespace "webhook-6056-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.402 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":303,"completed":107,"skipped":1805,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:01:24.829: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap that has name configmap-test-emptyKey-bd4004a9-2a33-42cd-85d5-935d3dae7455 [AfterEach] [sig-node] 
ConfigMap /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:01:24.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4048" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":303,"completed":108,"skipped":1831,"failed":0} SSSSSSSS ------------------------------ [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] PodTemplates /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:01:24.900: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should run the lifecycle of PodTemplates [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-node] PodTemplates /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:01:25.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-7514" for this suite. 
•{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":303,"completed":109,"skipped":1839,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:01:25.192: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a volume subpath [sig-storage] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in volume subpath Sep 15 11:01:25.270: INFO: Waiting up to 5m0s for pod "var-expansion-06f12e7d-1da5-426a-ab32-fc6c6157104d" in namespace "var-expansion-171" to be "Succeeded or Failed" Sep 15 11:01:25.274: INFO: Pod "var-expansion-06f12e7d-1da5-426a-ab32-fc6c6157104d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.865908ms Sep 15 11:01:27.311: INFO: Pod "var-expansion-06f12e7d-1da5-426a-ab32-fc6c6157104d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040718468s Sep 15 11:01:29.316: INFO: Pod "var-expansion-06f12e7d-1da5-426a-ab32-fc6c6157104d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.045373835s STEP: Saw pod success Sep 15 11:01:29.316: INFO: Pod "var-expansion-06f12e7d-1da5-426a-ab32-fc6c6157104d" satisfied condition "Succeeded or Failed" Sep 15 11:01:29.319: INFO: Trying to get logs from node kali-worker pod var-expansion-06f12e7d-1da5-426a-ab32-fc6c6157104d container dapi-container: STEP: delete the pod Sep 15 11:01:29.361: INFO: Waiting for pod var-expansion-06f12e7d-1da5-426a-ab32-fc6c6157104d to disappear Sep 15 11:01:29.370: INFO: Pod var-expansion-06f12e7d-1da5-426a-ab32-fc6c6157104d no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:01:29.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-171" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance]","total":303,"completed":110,"skipped":1876,"failed":0} SSSSSS ------------------------------ [sig-network] IngressClass API should support creating IngressClass API operations [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] IngressClass API /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:01:29.377: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ingressclass STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] IngressClass API /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingressclass.go:148 [It] should support creating 
IngressClass API operations [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting /apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.iov1 STEP: creating STEP: getting STEP: listing STEP: watching Sep 15 11:01:29.462: INFO: starting watch STEP: patching STEP: updating Sep 15 11:01:29.487: INFO: waiting for watch events with expected annotations Sep 15 11:01:29.487: INFO: saw patched and updated annotations STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] IngressClass API /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:01:29.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ingressclass-684" for this suite. •{"msg":"PASSED [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]","total":303,"completed":111,"skipped":1882,"failed":0} SSS ------------------------------ [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:01:29.547: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should run through a ConfigMap lifecycle [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: 
creating a ConfigMap STEP: fetching the ConfigMap STEP: patching the ConfigMap STEP: listing all ConfigMaps in all namespaces with a label selector STEP: deleting the ConfigMap by collection with a label selector STEP: listing all ConfigMaps in test namespace [AfterEach] [sig-node] ConfigMap /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:01:29.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-829" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":303,"completed":112,"skipped":1885,"failed":0} S ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:01:29.660: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-359 [It] Scaling should 
happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-359 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-359 Sep 15 11:01:29.833: INFO: Found 0 stateful pods, waiting for 1 Sep 15 11:01:39.839: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Sep 15 11:01:39.842: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-359 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Sep 15 11:01:40.130: INFO: stderr: "I0915 11:01:39.976777 1636 log.go:181] (0xc0007af3f0) (0xc0006c4960) Create stream\nI0915 11:01:39.976838 1636 log.go:181] (0xc0007af3f0) (0xc0006c4960) Stream added, broadcasting: 1\nI0915 11:01:39.982323 1636 log.go:181] (0xc0007af3f0) Reply frame received for 1\nI0915 11:01:39.982366 1636 log.go:181] (0xc0007af3f0) (0xc0000b23c0) Create stream\nI0915 11:01:39.982379 1636 log.go:181] (0xc0007af3f0) (0xc0000b23c0) Stream added, broadcasting: 3\nI0915 11:01:39.983399 1636 log.go:181] (0xc0007af3f0) Reply frame received for 3\nI0915 11:01:39.983448 1636 log.go:181] (0xc0007af3f0) (0xc0006c4000) Create stream\nI0915 11:01:39.983463 1636 log.go:181] (0xc0007af3f0) (0xc0006c4000) Stream added, broadcasting: 5\nI0915 11:01:39.986379 1636 log.go:181] (0xc0007af3f0) Reply frame received for 5\nI0915 11:01:40.077218 1636 log.go:181] (0xc0007af3f0) Data frame received for 5\nI0915 11:01:40.077242 1636 log.go:181] (0xc0006c4000) (5) Data frame handling\nI0915 11:01:40.077256 1636 log.go:181] (0xc0006c4000) (5) 
Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0915 11:01:40.123716 1636 log.go:181] (0xc0007af3f0) Data frame received for 3\nI0915 11:01:40.123745 1636 log.go:181] (0xc0000b23c0) (3) Data frame handling\nI0915 11:01:40.123757 1636 log.go:181] (0xc0000b23c0) (3) Data frame sent\nI0915 11:01:40.123763 1636 log.go:181] (0xc0007af3f0) Data frame received for 3\nI0915 11:01:40.123769 1636 log.go:181] (0xc0000b23c0) (3) Data frame handling\nI0915 11:01:40.123888 1636 log.go:181] (0xc0007af3f0) Data frame received for 5\nI0915 11:01:40.123927 1636 log.go:181] (0xc0006c4000) (5) Data frame handling\nI0915 11:01:40.125551 1636 log.go:181] (0xc0007af3f0) Data frame received for 1\nI0915 11:01:40.125581 1636 log.go:181] (0xc0006c4960) (1) Data frame handling\nI0915 11:01:40.125596 1636 log.go:181] (0xc0006c4960) (1) Data frame sent\nI0915 11:01:40.125622 1636 log.go:181] (0xc0007af3f0) (0xc0006c4960) Stream removed, broadcasting: 1\nI0915 11:01:40.125648 1636 log.go:181] (0xc0007af3f0) Go away received\nI0915 11:01:40.125984 1636 log.go:181] (0xc0007af3f0) (0xc0006c4960) Stream removed, broadcasting: 1\nI0915 11:01:40.125999 1636 log.go:181] (0xc0007af3f0) (0xc0000b23c0) Stream removed, broadcasting: 3\nI0915 11:01:40.126007 1636 log.go:181] (0xc0007af3f0) (0xc0006c4000) Stream removed, broadcasting: 5\n" Sep 15 11:01:40.130: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Sep 15 11:01:40.130: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Sep 15 11:01:40.153: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Sep 15 11:01:50.210: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Sep 15 11:01:50.210: INFO: Waiting for statefulset status.replicas updated to 0 Sep 15 11:01:50.233: INFO: Verifying statefulset ss doesn't scale past 1 for another 
9.999999496s Sep 15 11:01:51.238: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.996577349s Sep 15 11:01:52.243: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.991677629s Sep 15 11:01:53.247: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.987160878s Sep 15 11:01:54.252: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.98239396s Sep 15 11:01:55.297: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.977344757s Sep 15 11:01:56.307: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.932547432s Sep 15 11:01:57.312: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.922380844s Sep 15 11:01:58.317: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.917779017s Sep 15 11:01:59.323: INFO: Verifying statefulset ss doesn't scale past 1 for another 912.429676ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-359 Sep 15 11:02:00.327: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-359 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 15 11:02:00.542: INFO: stderr: "I0915 11:02:00.465776 1654 log.go:181] (0xc000b58e70) (0xc000129f40) Create stream\nI0915 11:02:00.465827 1654 log.go:181] (0xc000b58e70) (0xc000129f40) Stream added, broadcasting: 1\nI0915 11:02:00.470331 1654 log.go:181] (0xc000b58e70) Reply frame received for 1\nI0915 11:02:00.470382 1654 log.go:181] (0xc000b58e70) (0xc000128000) Create stream\nI0915 11:02:00.470429 1654 log.go:181] (0xc000b58e70) (0xc000128000) Stream added, broadcasting: 3\nI0915 11:02:00.471401 1654 log.go:181] (0xc000b58e70) Reply frame received for 3\nI0915 11:02:00.471442 1654 log.go:181] (0xc000b58e70) (0xc000436aa0) Create stream\nI0915 11:02:00.471454 1654 log.go:181] (0xc000b58e70) (0xc000436aa0) Stream added, 
broadcasting: 5\nI0915 11:02:00.472538 1654 log.go:181] (0xc000b58e70) Reply frame received for 5\nI0915 11:02:00.536709 1654 log.go:181] (0xc000b58e70) Data frame received for 3\nI0915 11:02:00.536745 1654 log.go:181] (0xc000128000) (3) Data frame handling\nI0915 11:02:00.536757 1654 log.go:181] (0xc000128000) (3) Data frame sent\nI0915 11:02:00.536765 1654 log.go:181] (0xc000b58e70) Data frame received for 3\nI0915 11:02:00.536773 1654 log.go:181] (0xc000128000) (3) Data frame handling\nI0915 11:02:00.536801 1654 log.go:181] (0xc000b58e70) Data frame received for 5\nI0915 11:02:00.536809 1654 log.go:181] (0xc000436aa0) (5) Data frame handling\nI0915 11:02:00.536824 1654 log.go:181] (0xc000436aa0) (5) Data frame sent\nI0915 11:02:00.536834 1654 log.go:181] (0xc000b58e70) Data frame received for 5\nI0915 11:02:00.536842 1654 log.go:181] (0xc000436aa0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0915 11:02:00.538381 1654 log.go:181] (0xc000b58e70) Data frame received for 1\nI0915 11:02:00.538420 1654 log.go:181] (0xc000129f40) (1) Data frame handling\nI0915 11:02:00.538446 1654 log.go:181] (0xc000129f40) (1) Data frame sent\nI0915 11:02:00.538476 1654 log.go:181] (0xc000b58e70) (0xc000129f40) Stream removed, broadcasting: 1\nI0915 11:02:00.538520 1654 log.go:181] (0xc000b58e70) Go away received\nI0915 11:02:00.538846 1654 log.go:181] (0xc000b58e70) (0xc000129f40) Stream removed, broadcasting: 1\nI0915 11:02:00.538861 1654 log.go:181] (0xc000b58e70) (0xc000128000) Stream removed, broadcasting: 3\nI0915 11:02:00.538867 1654 log.go:181] (0xc000b58e70) (0xc000436aa0) Stream removed, broadcasting: 5\n" Sep 15 11:02:00.542: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Sep 15 11:02:00.542: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Sep 15 11:02:00.546: INFO: Found 1 stateful pods, waiting for 3 Sep 15 
11:02:10.551: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Sep 15 11:02:10.551: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Sep 15 11:02:10.551: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Sep 15 11:02:10.559: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-359 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Sep 15 11:02:10.759: INFO: stderr: "I0915 11:02:10.688330 1672 log.go:181] (0xc00013ee70) (0xc0004c2460) Create stream\nI0915 11:02:10.688385 1672 log.go:181] (0xc00013ee70) (0xc0004c2460) Stream added, broadcasting: 1\nI0915 11:02:10.693259 1672 log.go:181] (0xc00013ee70) Reply frame received for 1\nI0915 11:02:10.693303 1672 log.go:181] (0xc00013ee70) (0xc000926460) Create stream\nI0915 11:02:10.693317 1672 log.go:181] (0xc00013ee70) (0xc000926460) Stream added, broadcasting: 3\nI0915 11:02:10.694107 1672 log.go:181] (0xc00013ee70) Reply frame received for 3\nI0915 11:02:10.694136 1672 log.go:181] (0xc00013ee70) (0xc000014140) Create stream\nI0915 11:02:10.694147 1672 log.go:181] (0xc00013ee70) (0xc000014140) Stream added, broadcasting: 5\nI0915 11:02:10.694923 1672 log.go:181] (0xc00013ee70) Reply frame received for 5\nI0915 11:02:10.751658 1672 log.go:181] (0xc00013ee70) Data frame received for 5\nI0915 11:02:10.751694 1672 log.go:181] (0xc000014140) (5) Data frame handling\nI0915 11:02:10.751708 1672 log.go:181] (0xc000014140) (5) Data frame sent\nI0915 11:02:10.751717 1672 log.go:181] (0xc00013ee70) Data frame received for 5\nI0915 11:02:10.751725 1672 log.go:181] (0xc000014140) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0915 11:02:10.751746 1672 log.go:181] 
(0xc00013ee70) Data frame received for 3\nI0915 11:02:10.751755 1672 log.go:181] (0xc000926460) (3) Data frame handling\nI0915 11:02:10.751766 1672 log.go:181] (0xc000926460) (3) Data frame sent\nI0915 11:02:10.751821 1672 log.go:181] (0xc00013ee70) Data frame received for 3\nI0915 11:02:10.751856 1672 log.go:181] (0xc000926460) (3) Data frame handling\nI0915 11:02:10.753640 1672 log.go:181] (0xc00013ee70) Data frame received for 1\nI0915 11:02:10.753675 1672 log.go:181] (0xc0004c2460) (1) Data frame handling\nI0915 11:02:10.753697 1672 log.go:181] (0xc0004c2460) (1) Data frame sent\nI0915 11:02:10.753722 1672 log.go:181] (0xc00013ee70) (0xc0004c2460) Stream removed, broadcasting: 1\nI0915 11:02:10.753753 1672 log.go:181] (0xc00013ee70) Go away received\nI0915 11:02:10.754206 1672 log.go:181] (0xc00013ee70) (0xc0004c2460) Stream removed, broadcasting: 1\nI0915 11:02:10.754230 1672 log.go:181] (0xc00013ee70) (0xc000926460) Stream removed, broadcasting: 3\nI0915 11:02:10.754243 1672 log.go:181] (0xc00013ee70) (0xc000014140) Stream removed, broadcasting: 5\n" Sep 15 11:02:10.759: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Sep 15 11:02:10.759: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Sep 15 11:02:10.759: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-359 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Sep 15 11:02:10.991: INFO: stderr: "I0915 11:02:10.895092 1690 log.go:181] (0xc00030d970) (0xc000d08a00) Create stream\nI0915 11:02:10.895139 1690 log.go:181] (0xc00030d970) (0xc000d08a00) Stream added, broadcasting: 1\nI0915 11:02:10.900603 1690 log.go:181] (0xc00030d970) Reply frame received for 1\nI0915 11:02:10.900652 1690 log.go:181] (0xc00030d970) (0xc000d08000) Create stream\nI0915 11:02:10.900662 1690 log.go:181] 
(0xc00030d970) (0xc000d08000) Stream added, broadcasting: 3\nI0915 11:02:10.901564 1690 log.go:181] (0xc00030d970) Reply frame received for 3\nI0915 11:02:10.901598 1690 log.go:181] (0xc00030d970) (0xc000528000) Create stream\nI0915 11:02:10.901610 1690 log.go:181] (0xc00030d970) (0xc000528000) Stream added, broadcasting: 5\nI0915 11:02:10.902396 1690 log.go:181] (0xc00030d970) Reply frame received for 5\nI0915 11:02:10.952577 1690 log.go:181] (0xc00030d970) Data frame received for 5\nI0915 11:02:10.952612 1690 log.go:181] (0xc000528000) (5) Data frame handling\nI0915 11:02:10.952644 1690 log.go:181] (0xc000528000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0915 11:02:10.983040 1690 log.go:181] (0xc00030d970) Data frame received for 5\nI0915 11:02:10.983079 1690 log.go:181] (0xc00030d970) Data frame received for 3\nI0915 11:02:10.983129 1690 log.go:181] (0xc000d08000) (3) Data frame handling\nI0915 11:02:10.983158 1690 log.go:181] (0xc000d08000) (3) Data frame sent\nI0915 11:02:10.983179 1690 log.go:181] (0xc00030d970) Data frame received for 3\nI0915 11:02:10.983198 1690 log.go:181] (0xc000d08000) (3) Data frame handling\nI0915 11:02:10.983247 1690 log.go:181] (0xc000528000) (5) Data frame handling\nI0915 11:02:10.985379 1690 log.go:181] (0xc00030d970) Data frame received for 1\nI0915 11:02:10.985412 1690 log.go:181] (0xc000d08a00) (1) Data frame handling\nI0915 11:02:10.985427 1690 log.go:181] (0xc000d08a00) (1) Data frame sent\nI0915 11:02:10.985515 1690 log.go:181] (0xc00030d970) (0xc000d08a00) Stream removed, broadcasting: 1\nI0915 11:02:10.985685 1690 log.go:181] (0xc00030d970) Go away received\nI0915 11:02:10.986058 1690 log.go:181] (0xc00030d970) (0xc000d08a00) Stream removed, broadcasting: 1\nI0915 11:02:10.986082 1690 log.go:181] (0xc00030d970) (0xc000d08000) Stream removed, broadcasting: 3\nI0915 11:02:10.986094 1690 log.go:181] (0xc00030d970) (0xc000528000) Stream removed, broadcasting: 5\n" Sep 15 11:02:10.991: INFO: 
stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Sep 15 11:02:10.991: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Sep 15 11:02:10.991: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-359 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Sep 15 11:02:11.342: INFO: stderr: "I0915 11:02:11.217798 1708 log.go:181] (0xc00091b600) (0xc00054a960) Create stream\nI0915 11:02:11.217859 1708 log.go:181] (0xc00091b600) (0xc00054a960) Stream added, broadcasting: 1\nI0915 11:02:11.222662 1708 log.go:181] (0xc00091b600) Reply frame received for 1\nI0915 11:02:11.222730 1708 log.go:181] (0xc00091b600) (0xc0007b4000) Create stream\nI0915 11:02:11.222749 1708 log.go:181] (0xc00091b600) (0xc0007b4000) Stream added, broadcasting: 3\nI0915 11:02:11.223617 1708 log.go:181] (0xc00091b600) Reply frame received for 3\nI0915 11:02:11.223667 1708 log.go:181] (0xc00091b600) (0xc00054a000) Create stream\nI0915 11:02:11.223682 1708 log.go:181] (0xc00091b600) (0xc00054a000) Stream added, broadcasting: 5\nI0915 11:02:11.224590 1708 log.go:181] (0xc00091b600) Reply frame received for 5\nI0915 11:02:11.292889 1708 log.go:181] (0xc00091b600) Data frame received for 5\nI0915 11:02:11.292911 1708 log.go:181] (0xc00054a000) (5) Data frame handling\nI0915 11:02:11.292925 1708 log.go:181] (0xc00054a000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0915 11:02:11.334782 1708 log.go:181] (0xc00091b600) Data frame received for 3\nI0915 11:02:11.334915 1708 log.go:181] (0xc0007b4000) (3) Data frame handling\nI0915 11:02:11.335025 1708 log.go:181] (0xc00091b600) Data frame received for 5\nI0915 11:02:11.335065 1708 log.go:181] (0xc00054a000) (5) Data frame handling\nI0915 11:02:11.335108 1708 log.go:181] (0xc0007b4000) (3) Data frame sent\nI0915 
11:02:11.335136 1708 log.go:181] (0xc00091b600) Data frame received for 3\nI0915 11:02:11.335156 1708 log.go:181] (0xc0007b4000) (3) Data frame handling\nI0915 11:02:11.337090 1708 log.go:181] (0xc00091b600) Data frame received for 1\nI0915 11:02:11.337113 1708 log.go:181] (0xc00054a960) (1) Data frame handling\nI0915 11:02:11.337132 1708 log.go:181] (0xc00054a960) (1) Data frame sent\nI0915 11:02:11.337150 1708 log.go:181] (0xc00091b600) (0xc00054a960) Stream removed, broadcasting: 1\nI0915 11:02:11.337172 1708 log.go:181] (0xc00091b600) Go away received\nI0915 11:02:11.337606 1708 log.go:181] (0xc00091b600) (0xc00054a960) Stream removed, broadcasting: 1\nI0915 11:02:11.337632 1708 log.go:181] (0xc00091b600) (0xc0007b4000) Stream removed, broadcasting: 3\nI0915 11:02:11.337645 1708 log.go:181] (0xc00091b600) (0xc00054a000) Stream removed, broadcasting: 5\n" Sep 15 11:02:11.343: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Sep 15 11:02:11.343: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Sep 15 11:02:11.343: INFO: Waiting for statefulset status.replicas updated to 0 Sep 15 11:02:11.346: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Sep 15 11:02:21.353: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Sep 15 11:02:21.353: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Sep 15 11:02:21.353: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Sep 15 11:02:21.368: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999335s Sep 15 11:02:22.373: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.991226828s Sep 15 11:02:23.379: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.986269195s Sep 15 11:02:24.409: INFO: Verifying statefulset ss 
doesn't scale past 3 for another 6.979918138s Sep 15 11:02:25.414: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.950485113s Sep 15 11:02:26.419: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.945360091s Sep 15 11:02:27.425: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.94003654s Sep 15 11:02:28.430: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.934687001s Sep 15 11:02:29.436: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.929153893s Sep 15 11:02:30.447: INFO: Verifying statefulset ss doesn't scale past 3 for another 923.540484ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-359 Sep 15 11:02:31.452: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-359 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 15 11:02:31.693: INFO: stderr: "I0915 11:02:31.599863 1726 log.go:181] (0xc0009d2000) (0xc000a06320) Create stream\nI0915 11:02:31.599934 1726 log.go:181] (0xc0009d2000) (0xc000a06320) Stream added, broadcasting: 1\nI0915 11:02:31.602275 1726 log.go:181] (0xc0009d2000) Reply frame received for 1\nI0915 11:02:31.602351 1726 log.go:181] (0xc0009d2000) (0xc000c180a0) Create stream\nI0915 11:02:31.602384 1726 log.go:181] (0xc0009d2000) (0xc000c180a0) Stream added, broadcasting: 3\nI0915 11:02:31.603597 1726 log.go:181] (0xc0009d2000) Reply frame received for 3\nI0915 11:02:31.603660 1726 log.go:181] (0xc0009d2000) (0xc000a074a0) Create stream\nI0915 11:02:31.603689 1726 log.go:181] (0xc0009d2000) (0xc000a074a0) Stream added, broadcasting: 5\nI0915 11:02:31.604724 1726 log.go:181] (0xc0009d2000) Reply frame received for 5\nI0915 11:02:31.686891 1726 log.go:181] (0xc0009d2000) Data frame received for 5\nI0915 11:02:31.686921 1726 log.go:181] (0xc000a074a0) (5) Data frame handling\nI0915 
11:02:31.686935 1726 log.go:181] (0xc000a074a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0915 11:02:31.686970 1726 log.go:181] (0xc0009d2000) Data frame received for 5\nI0915 11:02:31.686980 1726 log.go:181] (0xc000a074a0) (5) Data frame handling\nI0915 11:02:31.686999 1726 log.go:181] (0xc0009d2000) Data frame received for 3\nI0915 11:02:31.687008 1726 log.go:181] (0xc000c180a0) (3) Data frame handling\nI0915 11:02:31.687019 1726 log.go:181] (0xc000c180a0) (3) Data frame sent\nI0915 11:02:31.687029 1726 log.go:181] (0xc0009d2000) Data frame received for 3\nI0915 11:02:31.687044 1726 log.go:181] (0xc000c180a0) (3) Data frame handling\nI0915 11:02:31.688674 1726 log.go:181] (0xc0009d2000) Data frame received for 1\nI0915 11:02:31.688708 1726 log.go:181] (0xc000a06320) (1) Data frame handling\nI0915 11:02:31.688733 1726 log.go:181] (0xc000a06320) (1) Data frame sent\nI0915 11:02:31.688749 1726 log.go:181] (0xc0009d2000) (0xc000a06320) Stream removed, broadcasting: 1\nI0915 11:02:31.688767 1726 log.go:181] (0xc0009d2000) Go away received\nI0915 11:02:31.689312 1726 log.go:181] (0xc0009d2000) (0xc000a06320) Stream removed, broadcasting: 1\nI0915 11:02:31.689336 1726 log.go:181] (0xc0009d2000) (0xc000c180a0) Stream removed, broadcasting: 3\nI0915 11:02:31.689349 1726 log.go:181] (0xc0009d2000) (0xc000a074a0) Stream removed, broadcasting: 5\n" Sep 15 11:02:31.693: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Sep 15 11:02:31.693: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Sep 15 11:02:31.694: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-359 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 15 11:02:31.903: INFO: stderr: "I0915 11:02:31.834279 1745 log.go:181] (0xc0005c6000) (0xc0007361e0) 
Create stream\nI0915 11:02:31.834354 1745 log.go:181] (0xc0005c6000) (0xc0007361e0) Stream added, broadcasting: 1\nI0915 11:02:31.836223 1745 log.go:181] (0xc0005c6000) Reply frame received for 1\nI0915 11:02:31.836268 1745 log.go:181] (0xc0005c6000) (0xc0007141e0) Create stream\nI0915 11:02:31.836280 1745 log.go:181] (0xc0005c6000) (0xc0007141e0) Stream added, broadcasting: 3\nI0915 11:02:31.837564 1745 log.go:181] (0xc0005c6000) Reply frame received for 3\nI0915 11:02:31.837609 1745 log.go:181] (0xc0005c6000) (0xc000736280) Create stream\nI0915 11:02:31.837628 1745 log.go:181] (0xc0005c6000) (0xc000736280) Stream added, broadcasting: 5\nI0915 11:02:31.838644 1745 log.go:181] (0xc0005c6000) Reply frame received for 5\nI0915 11:02:31.896718 1745 log.go:181] (0xc0005c6000) Data frame received for 5\nI0915 11:02:31.896742 1745 log.go:181] (0xc000736280) (5) Data frame handling\nI0915 11:02:31.896750 1745 log.go:181] (0xc000736280) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0915 11:02:31.896772 1745 log.go:181] (0xc0005c6000) Data frame received for 3\nI0915 11:02:31.896805 1745 log.go:181] (0xc0007141e0) (3) Data frame handling\nI0915 11:02:31.896815 1745 log.go:181] (0xc0007141e0) (3) Data frame sent\nI0915 11:02:31.896823 1745 log.go:181] (0xc0005c6000) Data frame received for 3\nI0915 11:02:31.896829 1745 log.go:181] (0xc0007141e0) (3) Data frame handling\nI0915 11:02:31.896842 1745 log.go:181] (0xc0005c6000) Data frame received for 5\nI0915 11:02:31.896856 1745 log.go:181] (0xc000736280) (5) Data frame handling\nI0915 11:02:31.898983 1745 log.go:181] (0xc0005c6000) Data frame received for 1\nI0915 11:02:31.899026 1745 log.go:181] (0xc0007361e0) (1) Data frame handling\nI0915 11:02:31.899066 1745 log.go:181] (0xc0007361e0) (1) Data frame sent\nI0915 11:02:31.899105 1745 log.go:181] (0xc0005c6000) (0xc0007361e0) Stream removed, broadcasting: 1\nI0915 11:02:31.899138 1745 log.go:181] (0xc0005c6000) Go away received\nI0915 
11:02:31.899558 1745 log.go:181] (0xc0005c6000) (0xc0007361e0) Stream removed, broadcasting: 1\nI0915 11:02:31.899584 1745 log.go:181] (0xc0005c6000) (0xc0007141e0) Stream removed, broadcasting: 3\nI0915 11:02:31.899597 1745 log.go:181] (0xc0005c6000) (0xc000736280) Stream removed, broadcasting: 5\n" Sep 15 11:02:31.903: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Sep 15 11:02:31.903: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Sep 15 11:02:31.903: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-359 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 15 11:02:32.111: INFO: stderr: "I0915 11:02:32.028598 1763 log.go:181] (0xc000a300b0) (0xc0008d41e0) Create stream\nI0915 11:02:32.028671 1763 log.go:181] (0xc000a300b0) (0xc0008d41e0) Stream added, broadcasting: 1\nI0915 11:02:32.030183 1763 log.go:181] (0xc000a300b0) Reply frame received for 1\nI0915 11:02:32.030216 1763 log.go:181] (0xc000a300b0) (0xc0005f0000) Create stream\nI0915 11:02:32.030228 1763 log.go:181] (0xc000a300b0) (0xc0005f0000) Stream added, broadcasting: 3\nI0915 11:02:32.031065 1763 log.go:181] (0xc000a300b0) Reply frame received for 3\nI0915 11:02:32.031097 1763 log.go:181] (0xc000a300b0) (0xc0005f00a0) Create stream\nI0915 11:02:32.031105 1763 log.go:181] (0xc000a300b0) (0xc0005f00a0) Stream added, broadcasting: 5\nI0915 11:02:32.031795 1763 log.go:181] (0xc000a300b0) Reply frame received for 5\nI0915 11:02:32.104267 1763 log.go:181] (0xc000a300b0) Data frame received for 3\nI0915 11:02:32.104304 1763 log.go:181] (0xc0005f0000) (3) Data frame handling\nI0915 11:02:32.104321 1763 log.go:181] (0xc000a300b0) Data frame received for 5\nI0915 11:02:32.104335 1763 log.go:181] (0xc0005f00a0) (5) Data frame handling\nI0915 11:02:32.104346 1763 log.go:181] 
(0xc0005f00a0) (5) Data frame sent\nI0915 11:02:32.104355 1763 log.go:181] (0xc000a300b0) Data frame received for 5\nI0915 11:02:32.104360 1763 log.go:181] (0xc0005f00a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0915 11:02:32.104376 1763 log.go:181] (0xc0005f0000) (3) Data frame sent\nI0915 11:02:32.104486 1763 log.go:181] (0xc000a300b0) Data frame received for 3\nI0915 11:02:32.104514 1763 log.go:181] (0xc0005f0000) (3) Data frame handling\nI0915 11:02:32.105809 1763 log.go:181] (0xc000a300b0) Data frame received for 1\nI0915 11:02:32.105840 1763 log.go:181] (0xc0008d41e0) (1) Data frame handling\nI0915 11:02:32.105855 1763 log.go:181] (0xc0008d41e0) (1) Data frame sent\nI0915 11:02:32.105869 1763 log.go:181] (0xc000a300b0) (0xc0008d41e0) Stream removed, broadcasting: 1\nI0915 11:02:32.105885 1763 log.go:181] (0xc000a300b0) Go away received\nI0915 11:02:32.106226 1763 log.go:181] (0xc000a300b0) (0xc0008d41e0) Stream removed, broadcasting: 1\nI0915 11:02:32.106248 1763 log.go:181] (0xc000a300b0) (0xc0005f0000) Stream removed, broadcasting: 3\nI0915 11:02:32.106257 1763 log.go:181] (0xc000a300b0) (0xc0005f00a0) Stream removed, broadcasting: 5\n" Sep 15 11:02:32.111: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Sep 15 11:02:32.111: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Sep 15 11:02:32.111: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Sep 15 11:03:12.128: INFO: Deleting all statefulset in ns statefulset-359 Sep 15 11:03:12.131: INFO: Scaling statefulset ss to 0 Sep 15 11:03:12.143: INFO: Waiting for statefulset 
status.replicas updated to 0 Sep 15 11:03:12.145: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:03:12.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-359" for this suite. • [SLOW TEST:102.508 seconds] [sig-apps] StatefulSet /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":303,"completed":113,"skipped":1886,"failed":0} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:03:12.169: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 15 11:03:12.897: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 15 11:03:14.906: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735764592, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735764592, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735764593, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735764592, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 15 11:03:17.941: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:03:18.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5795" for this suite. STEP: Destroying namespace "webhook-5795-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.074 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":303,"completed":114,"skipped":1891,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:03:18.243: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-projected-xfb6 STEP: Creating a pod to test atomic-volume-subpath Sep 15 11:03:18.370: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-xfb6" in namespace "subpath-8910" to be "Succeeded or Failed" Sep 15 11:03:18.388: INFO: Pod "pod-subpath-test-projected-xfb6": Phase="Pending", Reason="", readiness=false. Elapsed: 18.366158ms Sep 15 11:03:20.418: INFO: Pod "pod-subpath-test-projected-xfb6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048332434s Sep 15 11:03:22.428: INFO: Pod "pod-subpath-test-projected-xfb6": Phase="Running", Reason="", readiness=true. Elapsed: 4.058864199s Sep 15 11:03:24.434: INFO: Pod "pod-subpath-test-projected-xfb6": Phase="Running", Reason="", readiness=true. Elapsed: 6.064053196s Sep 15 11:03:26.438: INFO: Pod "pod-subpath-test-projected-xfb6": Phase="Running", Reason="", readiness=true. Elapsed: 8.068266255s Sep 15 11:03:28.443: INFO: Pod "pod-subpath-test-projected-xfb6": Phase="Running", Reason="", readiness=true. Elapsed: 10.073340632s Sep 15 11:03:30.448: INFO: Pod "pod-subpath-test-projected-xfb6": Phase="Running", Reason="", readiness=true. Elapsed: 12.078729911s Sep 15 11:03:32.453: INFO: Pod "pod-subpath-test-projected-xfb6": Phase="Running", Reason="", readiness=true. Elapsed: 14.083255192s Sep 15 11:03:34.458: INFO: Pod "pod-subpath-test-projected-xfb6": Phase="Running", Reason="", readiness=true. Elapsed: 16.088208825s Sep 15 11:03:36.463: INFO: Pod "pod-subpath-test-projected-xfb6": Phase="Running", Reason="", readiness=true. Elapsed: 18.093017954s Sep 15 11:03:38.468: INFO: Pod "pod-subpath-test-projected-xfb6": Phase="Running", Reason="", readiness=true. Elapsed: 20.098312925s Sep 15 11:03:40.473: INFO: Pod "pod-subpath-test-projected-xfb6": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.103715569s Sep 15 11:03:42.478: INFO: Pod "pod-subpath-test-projected-xfb6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.10820247s STEP: Saw pod success Sep 15 11:03:42.478: INFO: Pod "pod-subpath-test-projected-xfb6" satisfied condition "Succeeded or Failed" Sep 15 11:03:42.481: INFO: Trying to get logs from node kali-worker2 pod pod-subpath-test-projected-xfb6 container test-container-subpath-projected-xfb6: STEP: delete the pod Sep 15 11:03:42.642: INFO: Waiting for pod pod-subpath-test-projected-xfb6 to disappear Sep 15 11:03:42.681: INFO: Pod pod-subpath-test-projected-xfb6 no longer exists STEP: Deleting pod pod-subpath-test-projected-xfb6 Sep 15 11:03:42.681: INFO: Deleting pod "pod-subpath-test-projected-xfb6" in namespace "subpath-8910" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:03:42.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8910" for this suite. 
• [SLOW TEST:24.468 seconds] [sig-storage] Subpath /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":303,"completed":115,"skipped":1908,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:03:42.711: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop complex daemon [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 15 11:03:42.800: INFO: Creating daemon "daemon-set" with a node selector STEP: 
Initially, daemon pods should not be running on any nodes. Sep 15 11:03:42.816: INFO: Number of nodes with available pods: 0 Sep 15 11:03:42.816: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. Sep 15 11:03:42.856: INFO: Number of nodes with available pods: 0 Sep 15 11:03:42.856: INFO: Node kali-worker2 is running more than one daemon pod Sep 15 11:03:43.939: INFO: Number of nodes with available pods: 0 Sep 15 11:03:43.939: INFO: Node kali-worker2 is running more than one daemon pod Sep 15 11:03:44.861: INFO: Number of nodes with available pods: 0 Sep 15 11:03:44.861: INFO: Node kali-worker2 is running more than one daemon pod Sep 15 11:03:45.873: INFO: Number of nodes with available pods: 0 Sep 15 11:03:45.873: INFO: Node kali-worker2 is running more than one daemon pod Sep 15 11:03:46.862: INFO: Number of nodes with available pods: 1 Sep 15 11:03:46.862: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Sep 15 11:03:46.891: INFO: Number of nodes with available pods: 1 Sep 15 11:03:46.891: INFO: Number of running nodes: 0, number of available pods: 1 Sep 15 11:03:47.896: INFO: Number of nodes with available pods: 0 Sep 15 11:03:47.897: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Sep 15 11:03:47.938: INFO: Number of nodes with available pods: 0 Sep 15 11:03:47.938: INFO: Node kali-worker2 is running more than one daemon pod Sep 15 11:03:48.943: INFO: Number of nodes with available pods: 0 Sep 15 11:03:48.943: INFO: Node kali-worker2 is running more than one daemon pod Sep 15 11:03:49.943: INFO: Number of nodes with available pods: 0 Sep 15 11:03:49.943: INFO: Node kali-worker2 is running more than one daemon pod Sep 15 11:03:50.944: INFO: Number of nodes with available pods: 0 Sep 15 
11:03:50.944: INFO: Node kali-worker2 is running more than one daemon pod Sep 15 11:03:51.942: INFO: Number of nodes with available pods: 0 Sep 15 11:03:51.942: INFO: Node kali-worker2 is running more than one daemon pod Sep 15 11:03:52.943: INFO: Number of nodes with available pods: 0 Sep 15 11:03:52.943: INFO: Node kali-worker2 is running more than one daemon pod Sep 15 11:03:53.993: INFO: Number of nodes with available pods: 0 Sep 15 11:03:53.993: INFO: Node kali-worker2 is running more than one daemon pod Sep 15 11:03:54.942: INFO: Number of nodes with available pods: 0 Sep 15 11:03:54.942: INFO: Node kali-worker2 is running more than one daemon pod Sep 15 11:03:55.942: INFO: Number of nodes with available pods: 0 Sep 15 11:03:55.942: INFO: Node kali-worker2 is running more than one daemon pod Sep 15 11:03:56.943: INFO: Number of nodes with available pods: 1 Sep 15 11:03:56.943: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1920, will wait for the garbage collector to delete the pods Sep 15 11:03:57.009: INFO: Deleting DaemonSet.extensions daemon-set took: 6.922334ms Sep 15 11:03:57.409: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.246621ms Sep 15 11:04:03.312: INFO: Number of nodes with available pods: 0 Sep 15 11:04:03.313: INFO: Number of running nodes: 0, number of available pods: 0 Sep 15 11:04:03.315: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1920/daemonsets","resourceVersion":"442568"},"items":null} Sep 15 11:04:03.318: INFO: pods: 
{"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1920/pods","resourceVersion":"442568"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:04:03.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1920" for this suite. • [SLOW TEST:20.684 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":303,"completed":116,"skipped":1929,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:04:03.395: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-cf20c194-fc49-4802-8696-096ca22cff9f STEP: Creating a pod to test consume configMaps Sep 15 11:04:04.616: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e257b9a9-173d-49a6-bcb7-5043d5483bf5" in namespace "projected-552" to be "Succeeded or Failed" Sep 15 11:04:04.825: INFO: Pod "pod-projected-configmaps-e257b9a9-173d-49a6-bcb7-5043d5483bf5": Phase="Pending", Reason="", readiness=false. Elapsed: 209.36291ms Sep 15 11:04:06.837: INFO: Pod "pod-projected-configmaps-e257b9a9-173d-49a6-bcb7-5043d5483bf5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.221583955s Sep 15 11:04:08.862: INFO: Pod "pod-projected-configmaps-e257b9a9-173d-49a6-bcb7-5043d5483bf5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.245814888s STEP: Saw pod success Sep 15 11:04:08.862: INFO: Pod "pod-projected-configmaps-e257b9a9-173d-49a6-bcb7-5043d5483bf5" satisfied condition "Succeeded or Failed" Sep 15 11:04:08.865: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-e257b9a9-173d-49a6-bcb7-5043d5483bf5 container projected-configmap-volume-test: STEP: delete the pod Sep 15 11:04:08.910: INFO: Waiting for pod pod-projected-configmaps-e257b9a9-173d-49a6-bcb7-5043d5483bf5 to disappear Sep 15 11:04:08.915: INFO: Pod pod-projected-configmaps-e257b9a9-173d-49a6-bcb7-5043d5483bf5 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:04:08.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-552" for this suite. 
• [SLOW TEST:5.528 seconds] [sig-storage] Projected configMap /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":303,"completed":117,"skipped":1949,"failed":0} [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:04:08.924: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-857 STEP: creating service affinity-clusterip in namespace services-857 STEP: creating replication controller affinity-clusterip in namespace services-857 I0915 11:04:09.161409 7 runners.go:190] 
Created replication controller with name: affinity-clusterip, namespace: services-857, replica count: 3 I0915 11:04:12.211797 7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0915 11:04:15.212056 7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Sep 15 11:04:15.218: INFO: Creating new exec pod Sep 15 11:04:20.255: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=services-857 execpod-affinity6gf9m -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80' Sep 15 11:04:23.433: INFO: stderr: "I0915 11:04:23.340925 1781 log.go:181] (0xc0004e8c60) (0xc000c1a3c0) Create stream\nI0915 11:04:23.341000 1781 log.go:181] (0xc0004e8c60) (0xc000c1a3c0) Stream added, broadcasting: 1\nI0915 11:04:23.344716 1781 log.go:181] (0xc0004e8c60) Reply frame received for 1\nI0915 11:04:23.344773 1781 log.go:181] (0xc0004e8c60) (0xc00017e000) Create stream\nI0915 11:04:23.344785 1781 log.go:181] (0xc0004e8c60) (0xc00017e000) Stream added, broadcasting: 3\nI0915 11:04:23.345921 1781 log.go:181] (0xc0004e8c60) Reply frame received for 3\nI0915 11:04:23.345955 1781 log.go:181] (0xc0004e8c60) (0xc00078e140) Create stream\nI0915 11:04:23.345968 1781 log.go:181] (0xc0004e8c60) (0xc00078e140) Stream added, broadcasting: 5\nI0915 11:04:23.346953 1781 log.go:181] (0xc0004e8c60) Reply frame received for 5\nI0915 11:04:23.425361 1781 log.go:181] (0xc0004e8c60) Data frame received for 5\nI0915 11:04:23.425399 1781 log.go:181] (0xc00078e140) (5) Data frame handling\nI0915 11:04:23.425435 1781 log.go:181] (0xc00078e140) (5) Data frame sent\nI0915 11:04:23.425453 1781 log.go:181] (0xc0004e8c60) Data frame received for 5\nI0915 11:04:23.425469 1781 log.go:181] (0xc00078e140) (5) Data frame handling\n+ nc -zv -t -w 2 
affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\nI0915 11:04:23.425506 1781 log.go:181] (0xc00078e140) (5) Data frame sent\nI0915 11:04:23.425815 1781 log.go:181] (0xc0004e8c60) Data frame received for 5\nI0915 11:04:23.425851 1781 log.go:181] (0xc00078e140) (5) Data frame handling\nI0915 11:04:23.426285 1781 log.go:181] (0xc0004e8c60) Data frame received for 3\nI0915 11:04:23.426309 1781 log.go:181] (0xc00017e000) (3) Data frame handling\nI0915 11:04:23.427899 1781 log.go:181] (0xc0004e8c60) Data frame received for 1\nI0915 11:04:23.427941 1781 log.go:181] (0xc000c1a3c0) (1) Data frame handling\nI0915 11:04:23.427980 1781 log.go:181] (0xc000c1a3c0) (1) Data frame sent\nI0915 11:04:23.428304 1781 log.go:181] (0xc0004e8c60) (0xc000c1a3c0) Stream removed, broadcasting: 1\nI0915 11:04:23.428348 1781 log.go:181] (0xc0004e8c60) Go away received\nI0915 11:04:23.428823 1781 log.go:181] (0xc0004e8c60) (0xc000c1a3c0) Stream removed, broadcasting: 1\nI0915 11:04:23.428848 1781 log.go:181] (0xc0004e8c60) (0xc00017e000) Stream removed, broadcasting: 3\nI0915 11:04:23.428861 1781 log.go:181] (0xc0004e8c60) (0xc00078e140) Stream removed, broadcasting: 5\n" Sep 15 11:04:23.433: INFO: stdout: "" Sep 15 11:04:23.435: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=services-857 execpod-affinity6gf9m -- /bin/sh -x -c nc -zv -t -w 2 10.111.182.11 80' Sep 15 11:04:23.622: INFO: stderr: "I0915 11:04:23.567735 1801 log.go:181] (0xc0007253f0) (0xc0006ccc80) Create stream\nI0915 11:04:23.567782 1801 log.go:181] (0xc0007253f0) (0xc0006ccc80) Stream added, broadcasting: 1\nI0915 11:04:23.570187 1801 log.go:181] (0xc0007253f0) Reply frame received for 1\nI0915 11:04:23.570216 1801 log.go:181] (0xc0007253f0) (0xc0007be1e0) Create stream\nI0915 11:04:23.570242 1801 log.go:181] (0xc0007253f0) (0xc0007be1e0) Stream added, broadcasting: 3\nI0915 11:04:23.571155 1801 log.go:181] 
(0xc0007253f0) Reply frame received for 3\nI0915 11:04:23.571204 1801 log.go:181] (0xc0007253f0) (0xc000cda320) Create stream\nI0915 11:04:23.571218 1801 log.go:181] (0xc0007253f0) (0xc000cda320) Stream added, broadcasting: 5\nI0915 11:04:23.572100 1801 log.go:181] (0xc0007253f0) Reply frame received for 5\nI0915 11:04:23.617053 1801 log.go:181] (0xc0007253f0) Data frame received for 5\nI0915 11:04:23.617102 1801 log.go:181] (0xc000cda320) (5) Data frame handling\nI0915 11:04:23.617112 1801 log.go:181] (0xc000cda320) (5) Data frame sent\nI0915 11:04:23.617119 1801 log.go:181] (0xc0007253f0) Data frame received for 5\nI0915 11:04:23.617125 1801 log.go:181] (0xc000cda320) (5) Data frame handling\n+ nc -zv -t -w 2 10.111.182.11 80\nConnection to 10.111.182.11 80 port [tcp/http] succeeded!\nI0915 11:04:23.617145 1801 log.go:181] (0xc0007253f0) Data frame received for 3\nI0915 11:04:23.617151 1801 log.go:181] (0xc0007be1e0) (3) Data frame handling\nI0915 11:04:23.618161 1801 log.go:181] (0xc0007253f0) Data frame received for 1\nI0915 11:04:23.618193 1801 log.go:181] (0xc0006ccc80) (1) Data frame handling\nI0915 11:04:23.618213 1801 log.go:181] (0xc0006ccc80) (1) Data frame sent\nI0915 11:04:23.618242 1801 log.go:181] (0xc0007253f0) (0xc0006ccc80) Stream removed, broadcasting: 1\nI0915 11:04:23.618261 1801 log.go:181] (0xc0007253f0) Go away received\nI0915 11:04:23.618575 1801 log.go:181] (0xc0007253f0) (0xc0006ccc80) Stream removed, broadcasting: 1\nI0915 11:04:23.618591 1801 log.go:181] (0xc0007253f0) (0xc0007be1e0) Stream removed, broadcasting: 3\nI0915 11:04:23.618598 1801 log.go:181] (0xc0007253f0) (0xc000cda320) Stream removed, broadcasting: 5\n" Sep 15 11:04:23.622: INFO: stdout: "" Sep 15 11:04:23.622: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=services-857 execpod-affinity6gf9m -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.111.182.11:80/ ; 
done' Sep 15 11:04:23.903: INFO: stderr: "I0915 11:04:23.743479 1819 log.go:181] (0xc000f153f0) (0xc000ad6820) Create stream\nI0915 11:04:23.743533 1819 log.go:181] (0xc000f153f0) (0xc000ad6820) Stream added, broadcasting: 1\nI0915 11:04:23.749614 1819 log.go:181] (0xc000f153f0) Reply frame received for 1\nI0915 11:04:23.749670 1819 log.go:181] (0xc000f153f0) (0xc000c6c000) Create stream\nI0915 11:04:23.749692 1819 log.go:181] (0xc000f153f0) (0xc000c6c000) Stream added, broadcasting: 3\nI0915 11:04:23.750639 1819 log.go:181] (0xc000f153f0) Reply frame received for 3\nI0915 11:04:23.750677 1819 log.go:181] (0xc000f153f0) (0xc000c6c140) Create stream\nI0915 11:04:23.750687 1819 log.go:181] (0xc000f153f0) (0xc000c6c140) Stream added, broadcasting: 5\nI0915 11:04:23.751506 1819 log.go:181] (0xc000f153f0) Reply frame received for 5\nI0915 11:04:23.818863 1819 log.go:181] (0xc000f153f0) Data frame received for 5\nI0915 11:04:23.818894 1819 log.go:181] (0xc000c6c140) (5) Data frame handling\nI0915 11:04:23.818903 1819 log.go:181] (0xc000c6c140) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.182.11:80/\nI0915 11:04:23.818919 1819 log.go:181] (0xc000f153f0) Data frame received for 3\nI0915 11:04:23.818925 1819 log.go:181] (0xc000c6c000) (3) Data frame handling\nI0915 11:04:23.818931 1819 log.go:181] (0xc000c6c000) (3) Data frame sent\nI0915 11:04:23.821851 1819 log.go:181] (0xc000f153f0) Data frame received for 3\nI0915 11:04:23.821872 1819 log.go:181] (0xc000c6c000) (3) Data frame handling\nI0915 11:04:23.821887 1819 log.go:181] (0xc000c6c000) (3) Data frame sent\nI0915 11:04:23.822318 1819 log.go:181] (0xc000f153f0) Data frame received for 3\nI0915 11:04:23.822337 1819 log.go:181] (0xc000f153f0) Data frame received for 5\nI0915 11:04:23.822354 1819 log.go:181] (0xc000c6c140) (5) Data frame handling\nI0915 11:04:23.822364 1819 log.go:181] (0xc000c6c140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 
http://10.111.182.11:80/\nI0915 11:04:23.822376 1819 log.go:181] (0xc000c6c000) (3) Data frame handling\nI0915 11:04:23.822382 1819 log.go:181] (0xc000c6c000) (3) Data frame sent\nI0915 11:04:23.825924 1819 log.go:181] (0xc000f153f0) Data frame received for 3\nI0915 11:04:23.825946 1819 log.go:181] (0xc000c6c000) (3) Data frame handling\nI0915 11:04:23.825966 1819 log.go:181] (0xc000c6c000) (3) Data frame sent\nI0915 11:04:23.826470 1819 log.go:181] (0xc000f153f0) Data frame received for 3\nI0915 11:04:23.826490 1819 log.go:181] (0xc000c6c000) (3) Data frame handling\nI0915 11:04:23.826503 1819 log.go:181] (0xc000c6c000) (3) Data frame sent\nI0915 11:04:23.826526 1819 log.go:181] (0xc000f153f0) Data frame received for 5\nI0915 11:04:23.826534 1819 log.go:181] (0xc000c6c140) (5) Data frame handling\nI0915 11:04:23.826540 1819 log.go:181] (0xc000c6c140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.182.11:80/\nI0915 11:04:23.830178 1819 log.go:181] (0xc000f153f0) Data frame received for 3\nI0915 11:04:23.830199 1819 log.go:181] (0xc000c6c000) (3) Data frame handling\nI0915 11:04:23.830219 1819 log.go:181] (0xc000c6c000) (3) Data frame sent\nI0915 11:04:23.830799 1819 log.go:181] (0xc000f153f0) Data frame received for 3\nI0915 11:04:23.830826 1819 log.go:181] (0xc000c6c000) (3) Data frame handling\nI0915 11:04:23.830839 1819 log.go:181] (0xc000c6c000) (3) Data frame sent\nI0915 11:04:23.830852 1819 log.go:181] (0xc000f153f0) Data frame received for 5\nI0915 11:04:23.830858 1819 log.go:181] (0xc000c6c140) (5) Data frame handling\nI0915 11:04:23.830864 1819 log.go:181] (0xc000c6c140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.182.11:80/\nI0915 11:04:23.835776 1819 log.go:181] (0xc000f153f0) Data frame received for 3\nI0915 11:04:23.835806 1819 log.go:181] (0xc000c6c000) (3) Data frame handling\nI0915 11:04:23.835825 1819 log.go:181] (0xc000c6c000) (3) Data frame sent\nI0915 11:04:23.836450 1819 log.go:181] 
(0xc000f153f0) Data frame received for 3\nI0915 11:04:23.836479 1819 log.go:181] (0xc000c6c000) (3) Data frame handling\nI0915 11:04:23.836489 1819 log.go:181] (0xc000c6c000) (3) Data frame sent\nI0915 11:04:23.836498 1819 log.go:181] (0xc000f153f0) Data frame received for 5\nI0915 11:04:23.836505 1819 log.go:181] (0xc000c6c140) (5) Data frame handling\nI0915 11:04:23.836514 1819 log.go:181] (0xc000c6c140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.182.11:80/\nI0915 11:04:23.841079 1819 log.go:181] (0xc000f153f0) Data frame received for 3\nI0915 11:04:23.841111 1819 log.go:181] (0xc000c6c000) (3) Data frame handling\nI0915 11:04:23.841137 1819 log.go:181] (0xc000c6c000) (3) Data frame sent\nI0915 11:04:23.841720 1819 log.go:181] (0xc000f153f0) Data frame received for 3\nI0915 11:04:23.841737 1819 log.go:181] (0xc000c6c000) (3) Data frame handling\nI0915 11:04:23.841747 1819 log.go:181] (0xc000c6c000) (3) Data frame sent\nI0915 11:04:23.841768 1819 log.go:181] (0xc000f153f0) Data frame received for 5\nI0915 11:04:23.841794 1819 log.go:181] (0xc000c6c140) (5) Data frame handling\nI0915 11:04:23.841810 1819 log.go:181] (0xc000c6c140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.182.11:80/\nI0915 11:04:23.847389 1819 log.go:181] (0xc000f153f0) Data frame received for 3\nI0915 11:04:23.847401 1819 log.go:181] (0xc000c6c000) (3) Data frame handling\nI0915 11:04:23.847407 1819 log.go:181] (0xc000c6c000) (3) Data frame sent\nI0915 11:04:23.848259 1819 log.go:181] (0xc000f153f0) Data frame received for 5\nI0915 11:04:23.848280 1819 log.go:181] (0xc000c6c140) (5) Data frame handling\nI0915 11:04:23.848296 1819 log.go:181] (0xc000c6c140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.182.11:80/\nI0915 11:04:23.848371 1819 log.go:181] (0xc000f153f0) Data frame received for 3\nI0915 11:04:23.848393 1819 log.go:181] (0xc000c6c000) (3) Data frame handling\nI0915 11:04:23.848410 1819 
log.go:181] (0xc000c6c000) (3) Data frame sent\nI0915 11:04:23.852948 1819 log.go:181] (0xc000f153f0) Data frame received for 3\nI0915 11:04:23.852978 1819 log.go:181] (0xc000c6c000) (3) Data frame handling\nI0915 11:04:23.853009 1819 log.go:181] (0xc000c6c000) (3) Data frame sent\nI0915 11:04:23.853729 1819 log.go:181] (0xc000f153f0) Data frame received for 5\nI0915 11:04:23.853753 1819 log.go:181] (0xc000c6c140) (5) Data frame handling\nI0915 11:04:23.853774 1819 log.go:181] (0xc000c6c140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.182.11:80/\nI0915 11:04:23.853843 1819 log.go:181] (0xc000f153f0) Data frame received for 3\nI0915 11:04:23.853857 1819 log.go:181] (0xc000c6c000) (3) Data frame handling\nI0915 11:04:23.853870 1819 log.go:181] (0xc000c6c000) (3) Data frame sent\nI0915 11:04:23.858849 1819 log.go:181] (0xc000f153f0) Data frame received for 3\nI0915 11:04:23.858953 1819 log.go:181] (0xc000c6c000) (3) Data frame handling\nI0915 11:04:23.858998 1819 log.go:181] (0xc000c6c000) (3) Data frame sent\nI0915 11:04:23.859455 1819 log.go:181] (0xc000f153f0) Data frame received for 3\nI0915 11:04:23.859484 1819 log.go:181] (0xc000c6c000) (3) Data frame handling\nI0915 11:04:23.859497 1819 log.go:181] (0xc000c6c000) (3) Data frame sent\nI0915 11:04:23.859514 1819 log.go:181] (0xc000f153f0) Data frame received for 5\nI0915 11:04:23.859522 1819 log.go:181] (0xc000c6c140) (5) Data frame handling\nI0915 11:04:23.859530 1819 log.go:181] (0xc000c6c140) (5) Data frame sent\nI0915 11:04:23.859538 1819 log.go:181] (0xc000f153f0) Data frame received for 5\nI0915 11:04:23.859545 1819 log.go:181] (0xc000c6c140) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.182.11:80/\nI0915 11:04:23.859628 1819 log.go:181] (0xc000c6c140) (5) Data frame sent\nI0915 11:04:23.865128 1819 log.go:181] (0xc000f153f0) Data frame received for 3\nI0915 11:04:23.865157 1819 log.go:181] (0xc000c6c000) (3) Data frame handling\nI0915 
11:04:23.865178 1819 log.go:181] (0xc000c6c000) (3) Data frame sent\nI0915 11:04:23.865719 1819 log.go:181] (0xc000f153f0) Data frame received for 5\nI0915 11:04:23.865742 1819 log.go:181] (0xc000c6c140) (5) Data frame handling\nI0915 11:04:23.865762 1819 log.go:181] (0xc000c6c140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.182.11:80/\nI0915 11:04:23.865840 1819 log.go:181] (0xc000f153f0) Data frame received for 3\nI0915 11:04:23.865863 1819 log.go:181] (0xc000c6c000) (3) Data frame handling\nI0915 11:04:23.865886 1819 log.go:181] (0xc000c6c000) (3) Data frame sent\nI0915 11:04:23.869714 1819 log.go:181] (0xc000f153f0) Data frame received for 3\nI0915 11:04:23.869734 1819 log.go:181] (0xc000c6c000) (3) Data frame handling\nI0915 11:04:23.869755 1819 log.go:181] (0xc000c6c000) (3) Data frame sent\nI0915 11:04:23.870530 1819 log.go:181] (0xc000f153f0) Data frame received for 5\nI0915 11:04:23.870551 1819 log.go:181] (0xc000c6c140) (5) Data frame handling\nI0915 11:04:23.870569 1819 log.go:181] (0xc000c6c140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.182.11:80/\nI0915 11:04:23.872783 1819 log.go:181] (0xc000f153f0) Data frame received for 3\nI0915 11:04:23.872798 1819 log.go:181] (0xc000c6c000) (3) Data frame handling\nI0915 11:04:23.872808 1819 log.go:181] (0xc000c6c000) (3) Data frame sent\nI0915 11:04:23.875241 1819 log.go:181] (0xc000f153f0) Data frame received for 3\nI0915 11:04:23.875258 1819 log.go:181] (0xc000c6c000) (3) Data frame handling\nI0915 11:04:23.875272 1819 log.go:181] (0xc000c6c000) (3) Data frame sent\nI0915 11:04:23.875731 1819 log.go:181] (0xc000f153f0) Data frame received for 5\nI0915 11:04:23.875751 1819 log.go:181] (0xc000f153f0) Data frame received for 3\nI0915 11:04:23.875770 1819 log.go:181] (0xc000c6c000) (3) Data frame handling\nI0915 11:04:23.875778 1819 log.go:181] (0xc000c6c000) (3) Data frame sent\nI0915 11:04:23.875801 1819 log.go:181] (0xc000c6c140) (5) Data frame 
handling\nI0915 11:04:23.875834 1819 log.go:181] (0xc000c6c140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.182.11:80/\nI0915 11:04:23.879129 1819 log.go:181] (0xc000f153f0) Data frame received for 3\nI0915 11:04:23.879146 1819 log.go:181] (0xc000c6c000) (3) Data frame handling\nI0915 11:04:23.879157 1819 log.go:181] (0xc000c6c000) (3) Data frame sent\nI0915 11:04:23.879446 1819 log.go:181] (0xc000f153f0) Data frame received for 3\nI0915 11:04:23.879461 1819 log.go:181] (0xc000c6c000) (3) Data frame handling\nI0915 11:04:23.879470 1819 log.go:181] (0xc000c6c000) (3) Data frame sent\nI0915 11:04:23.879479 1819 log.go:181] (0xc000f153f0) Data frame received for 5\nI0915 11:04:23.879484 1819 log.go:181] (0xc000c6c140) (5) Data frame handling\nI0915 11:04:23.879489 1819 log.go:181] (0xc000c6c140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.182.11:80/\nI0915 11:04:23.884068 1819 log.go:181] (0xc000f153f0) Data frame received for 3\nI0915 11:04:23.884079 1819 log.go:181] (0xc000c6c000) (3) Data frame handling\nI0915 11:04:23.884087 1819 log.go:181] (0xc000c6c000) (3) Data frame sent\nI0915 11:04:23.884732 1819 log.go:181] (0xc000f153f0) Data frame received for 5\nI0915 11:04:23.884760 1819 log.go:181] (0xc000c6c140) (5) Data frame handling\nI0915 11:04:23.884771 1819 log.go:181] (0xc000c6c140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.182.11:80/\nI0915 11:04:23.884787 1819 log.go:181] (0xc000f153f0) Data frame received for 3\nI0915 11:04:23.884795 1819 log.go:181] (0xc000c6c000) (3) Data frame handling\nI0915 11:04:23.884802 1819 log.go:181] (0xc000c6c000) (3) Data frame sent\nI0915 11:04:23.888427 1819 log.go:181] (0xc000f153f0) Data frame received for 3\nI0915 11:04:23.888439 1819 log.go:181] (0xc000c6c000) (3) Data frame handling\nI0915 11:04:23.888448 1819 log.go:181] (0xc000c6c000) (3) Data frame sent\nI0915 11:04:23.888954 1819 log.go:181] (0xc000f153f0) Data frame 
received for 5\nI0915 11:04:23.888963 1819 log.go:181] (0xc000c6c140) (5) Data frame handling\nI0915 11:04:23.888969 1819 log.go:181] (0xc000c6c140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.182.11:80/\nI0915 11:04:23.889074 1819 log.go:181] (0xc000f153f0) Data frame received for 3\nI0915 11:04:23.889094 1819 log.go:181] (0xc000c6c000) (3) Data frame handling\nI0915 11:04:23.889127 1819 log.go:181] (0xc000c6c000) (3) Data frame sent\nI0915 11:04:23.893576 1819 log.go:181] (0xc000f153f0) Data frame received for 3\nI0915 11:04:23.893587 1819 log.go:181] (0xc000c6c000) (3) Data frame handling\nI0915 11:04:23.893595 1819 log.go:181] (0xc000c6c000) (3) Data frame sent\nI0915 11:04:23.894118 1819 log.go:181] (0xc000f153f0) Data frame received for 5\nI0915 11:04:23.894134 1819 log.go:181] (0xc000c6c140) (5) Data frame handling\nI0915 11:04:23.894140 1819 log.go:181] (0xc000c6c140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.182.11:80/\nI0915 11:04:23.894166 1819 log.go:181] (0xc000f153f0) Data frame received for 3\nI0915 11:04:23.894189 1819 log.go:181] (0xc000c6c000) (3) Data frame handling\nI0915 11:04:23.894204 1819 log.go:181] (0xc000c6c000) (3) Data frame sent\nI0915 11:04:23.897515 1819 log.go:181] (0xc000f153f0) Data frame received for 3\nI0915 11:04:23.897542 1819 log.go:181] (0xc000c6c000) (3) Data frame handling\nI0915 11:04:23.897557 1819 log.go:181] (0xc000c6c000) (3) Data frame sent\nI0915 11:04:23.897985 1819 log.go:181] (0xc000f153f0) Data frame received for 3\nI0915 11:04:23.898012 1819 log.go:181] (0xc000c6c000) (3) Data frame handling\nI0915 11:04:23.898090 1819 log.go:181] (0xc000f153f0) Data frame received for 5\nI0915 11:04:23.898109 1819 log.go:181] (0xc000c6c140) (5) Data frame handling\nI0915 11:04:23.899230 1819 log.go:181] (0xc000f153f0) Data frame received for 1\nI0915 11:04:23.899241 1819 log.go:181] (0xc000ad6820) (1) Data frame handling\nI0915 11:04:23.899250 1819 log.go:181] 
(0xc000ad6820) (1) Data frame sent\nI0915 11:04:23.899357 1819 log.go:181] (0xc000f153f0) (0xc000ad6820) Stream removed, broadcasting: 1\nI0915 11:04:23.899378 1819 log.go:181] (0xc000f153f0) Go away received\nI0915 11:04:23.899682 1819 log.go:181] (0xc000f153f0) (0xc000ad6820) Stream removed, broadcasting: 1\nI0915 11:04:23.899694 1819 log.go:181] (0xc000f153f0) (0xc000c6c000) Stream removed, broadcasting: 3\nI0915 11:04:23.899699 1819 log.go:181] (0xc000f153f0) (0xc000c6c140) Stream removed, broadcasting: 5\n" Sep 15 11:04:23.903: INFO: stdout: "\naffinity-clusterip-ns798\naffinity-clusterip-ns798\naffinity-clusterip-ns798\naffinity-clusterip-ns798\naffinity-clusterip-ns798\naffinity-clusterip-ns798\naffinity-clusterip-ns798\naffinity-clusterip-ns798\naffinity-clusterip-ns798\naffinity-clusterip-ns798\naffinity-clusterip-ns798\naffinity-clusterip-ns798\naffinity-clusterip-ns798\naffinity-clusterip-ns798\naffinity-clusterip-ns798\naffinity-clusterip-ns798" Sep 15 11:04:23.903: INFO: Received response from host: affinity-clusterip-ns798 Sep 15 11:04:23.903: INFO: Received response from host: affinity-clusterip-ns798 Sep 15 11:04:23.903: INFO: Received response from host: affinity-clusterip-ns798 Sep 15 11:04:23.903: INFO: Received response from host: affinity-clusterip-ns798 Sep 15 11:04:23.903: INFO: Received response from host: affinity-clusterip-ns798 Sep 15 11:04:23.903: INFO: Received response from host: affinity-clusterip-ns798 Sep 15 11:04:23.903: INFO: Received response from host: affinity-clusterip-ns798 Sep 15 11:04:23.903: INFO: Received response from host: affinity-clusterip-ns798 Sep 15 11:04:23.903: INFO: Received response from host: affinity-clusterip-ns798 Sep 15 11:04:23.903: INFO: Received response from host: affinity-clusterip-ns798 Sep 15 11:04:23.903: INFO: Received response from host: affinity-clusterip-ns798 Sep 15 11:04:23.903: INFO: Received response from host: affinity-clusterip-ns798 Sep 15 11:04:23.903: INFO: Received response from host: 
affinity-clusterip-ns798 Sep 15 11:04:23.903: INFO: Received response from host: affinity-clusterip-ns798 Sep 15 11:04:23.903: INFO: Received response from host: affinity-clusterip-ns798 Sep 15 11:04:23.903: INFO: Received response from host: affinity-clusterip-ns798 Sep 15 11:04:23.903: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip in namespace services-857, will wait for the garbage collector to delete the pods Sep 15 11:04:24.105: INFO: Deleting ReplicationController affinity-clusterip took: 84.365747ms Sep 15 11:04:24.505: INFO: Terminating ReplicationController affinity-clusterip pods took: 400.214177ms [AfterEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:04:33.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-857" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:24.344 seconds] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":303,"completed":118,"skipped":1949,"failed":0} SSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:04:33.268: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartNever pod [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod Sep 15 11:04:33.315: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:04:40.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-6036" for this suite. 
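The session-affinity pass earlier in the run (the long curl loop against `http://10.111.182.11:80/`) reduces to one assertion: every probe against the ClusterIP returned the same backend pod name. A minimal offline sketch of that check, with the responses canned from the log above rather than fetched from a live service:

```shell
# Offline sketch of the ClusterIP session-affinity check: the exec pod
# curls the service repeatedly; affinity holds iff every response names
# the same backend pod. Responses below are canned from the log.
responses="affinity-clusterip-ns798
affinity-clusterip-ns798
affinity-clusterip-ns798
affinity-clusterip-ns798"

# Count how many distinct backend names were seen across all probes.
distinct=$(printf '%s\n' "$responses" | sort -u | wc -l)

if [ "$distinct" -eq 1 ]; then
    echo "session affinity held"
else
    echo "affinity broken: $distinct distinct backends"
fi
```

In the real test the response list comes from `curl -q -s --connect-timeout 2` run inside an exec pod, as the `+ curl …` trace lines show.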
• [SLOW TEST:7.166 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should invoke init containers on a RestartNever pod [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":303,"completed":119,"skipped":1953,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:04:40.435: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-3fd17d98-2bb7-430c-b7a7-e709b11f4422 STEP: Creating a pod to test consume configMaps Sep 15 11:04:40.870: INFO: Waiting up to 5m0s for pod "pod-configmaps-3dc4c41c-8d85-4b64-8941-dd12e4f55c18" in namespace "configmap-5437" to be "Succeeded or Failed" Sep 15 11:04:40.875: INFO: Pod 
"pod-configmaps-3dc4c41c-8d85-4b64-8941-dd12e4f55c18": Phase="Pending", Reason="", readiness=false. Elapsed: 4.299981ms Sep 15 11:04:42.879: INFO: Pod "pod-configmaps-3dc4c41c-8d85-4b64-8941-dd12e4f55c18": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008392709s Sep 15 11:04:44.883: INFO: Pod "pod-configmaps-3dc4c41c-8d85-4b64-8941-dd12e4f55c18": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012667607s STEP: Saw pod success Sep 15 11:04:44.883: INFO: Pod "pod-configmaps-3dc4c41c-8d85-4b64-8941-dd12e4f55c18" satisfied condition "Succeeded or Failed" Sep 15 11:04:44.886: INFO: Trying to get logs from node kali-worker pod pod-configmaps-3dc4c41c-8d85-4b64-8941-dd12e4f55c18 container configmap-volume-test: STEP: delete the pod Sep 15 11:04:44.922: INFO: Waiting for pod pod-configmaps-3dc4c41c-8d85-4b64-8941-dd12e4f55c18 to disappear Sep 15 11:04:44.941: INFO: Pod pod-configmaps-3dc4c41c-8d85-4b64-8941-dd12e4f55c18 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:04:44.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5437" for this suite. 
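The repeated `Waiting up to 5m0s for pod … to be "Succeeded or Failed"` / `Elapsed:` lines above are the framework's poll loop: fetch the pod, stop when its phase is terminal, otherwise wait and retry. A stubbed sketch of that logic with the phase sequence canned (no API server involved):

```shell
# Stubbed poll loop mirroring the e2e framework's wait-for-pod logic:
# sample the pod phase until it is terminal (Succeeded or Failed) or
# the canned sequence runs out. Phases here mimic the log above, they
# are not fetched from a real API server.
phases="Pending Pending Succeeded"
result=""
for phase in $phases; do
    case "$phase" in
        Succeeded|Failed)
            result="$phase"
            break
            ;;
    esac
    # a real poller would sleep ~2s between GETs here, as the
    # ~2s Elapsed increments in the log suggest
done

echo "pod reached terminal phase: ${result:-timeout}"
```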
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":303,"completed":120,"skipped":1963,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:04:44.949: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 15 11:04:45.025: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fb51b9f8-c9e9-4148-92ac-1972124209f7" in namespace "projected-3536" to be "Succeeded or Failed" Sep 15 11:04:45.037: INFO: Pod "downwardapi-volume-fb51b9f8-c9e9-4148-92ac-1972124209f7": Phase="Pending", Reason="", readiness=false. Elapsed: 11.41068ms Sep 15 11:04:47.041: INFO: Pod "downwardapi-volume-fb51b9f8-c9e9-4148-92ac-1972124209f7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.015503474s Sep 15 11:04:49.045: INFO: Pod "downwardapi-volume-fb51b9f8-c9e9-4148-92ac-1972124209f7": Phase="Running", Reason="", readiness=true. Elapsed: 4.019696598s Sep 15 11:04:51.049: INFO: Pod "downwardapi-volume-fb51b9f8-c9e9-4148-92ac-1972124209f7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.023697431s STEP: Saw pod success Sep 15 11:04:51.049: INFO: Pod "downwardapi-volume-fb51b9f8-c9e9-4148-92ac-1972124209f7" satisfied condition "Succeeded or Failed" Sep 15 11:04:51.052: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-fb51b9f8-c9e9-4148-92ac-1972124209f7 container client-container: STEP: delete the pod Sep 15 11:04:51.109: INFO: Waiting for pod downwardapi-volume-fb51b9f8-c9e9-4148-92ac-1972124209f7 to disappear Sep 15 11:04:51.114: INFO: Pod downwardapi-volume-fb51b9f8-c9e9-4148-92ac-1972124209f7 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:04:51.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3536" for this suite. 
• [SLOW TEST:6.173 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":303,"completed":121,"skipped":1980,"failed":0} [k8s.io] Pods should delete a collection of pods [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:04:51.122: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should delete a collection of pods [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create set of pods Sep 15 11:04:51.186: INFO: created test-pod-1 Sep 15 11:04:51.207: INFO: created test-pod-2 Sep 15 11:04:51.252: INFO: created test-pod-3 STEP: waiting for all 3 pods to be located STEP: waiting for all pods to be deleted [AfterEach] [k8s.io] Pods 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:04:51.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7477" for this suite. •{"msg":"PASSED [k8s.io] Pods should delete a collection of pods [Conformance]","total":303,"completed":122,"skipped":1980,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:04:51.555: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 15 11:04:51.615: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-5077b201-3da9-4baa-90c1-481e2e7d7339" in namespace "security-context-test-4932" to be "Succeeded or Failed" Sep 15 11:04:51.671: INFO: Pod "busybox-privileged-false-5077b201-3da9-4baa-90c1-481e2e7d7339": Phase="Pending", Reason="", 
readiness=false. Elapsed: 55.555143ms Sep 15 11:04:53.688: INFO: Pod "busybox-privileged-false-5077b201-3da9-4baa-90c1-481e2e7d7339": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073349428s Sep 15 11:04:55.694: INFO: Pod "busybox-privileged-false-5077b201-3da9-4baa-90c1-481e2e7d7339": Phase="Pending", Reason="", readiness=false. Elapsed: 4.079163265s Sep 15 11:04:57.736: INFO: Pod "busybox-privileged-false-5077b201-3da9-4baa-90c1-481e2e7d7339": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.121180488s Sep 15 11:04:57.736: INFO: Pod "busybox-privileged-false-5077b201-3da9-4baa-90c1-481e2e7d7339" satisfied condition "Succeeded or Failed" Sep 15 11:04:57.778: INFO: Got logs for pod "busybox-privileged-false-5077b201-3da9-4baa-90c1-481e2e7d7339": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:04:57.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-4932" for this suite. 
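The `ip: RTNETLINK answers: Operation not permitted` log line is the expected outcome: with `privileged: false`, the busybox container's `ip` command may not modify kernel network state. A sketch of the assertion the test effectively makes on the captured pod logs (log line canned from the run above):

```shell
# With privileged=false, network-configuration commands inside the
# container must fail with EPERM; the test checks the pod log for that
# message. The log line is canned from the run above.
pod_log="ip: RTNETLINK answers: Operation not permitted"

case "$pod_log" in
    *"Operation not permitted"*)
        verdict="unprivileged as expected"
        ;;
    *)
        verdict="FAIL: container appears to have run privileged"
        ;;
esac
echo "$verdict"
```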
• [SLOW TEST:6.230 seconds] [k8s.io] Security Context /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 When creating a pod with privileged /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:227 should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":123,"skipped":2030,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-network] Ingress API should support creating Ingress API operations [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Ingress API /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:04:57.786: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ingress STEP: Waiting for a default service account to be provisioned in namespace [It] should support creating Ingress API operations [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting /apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.iov1 STEP: creating STEP: getting STEP: listing STEP: watching Sep 15 11:04:58.015: INFO: 
starting watch STEP: cluster-wide listing STEP: cluster-wide watching Sep 15 11:04:58.018: INFO: starting watch STEP: patching STEP: updating Sep 15 11:04:58.030: INFO: waiting for watch events with expected annotations Sep 15 11:04:58.031: INFO: saw patched and updated annotations STEP: patching /status STEP: updating /status STEP: get /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] Ingress API /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:04:58.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ingress-2946" for this suite. •{"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":303,"completed":124,"skipped":2044,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:04:58.141: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 15 11:04:58.202: INFO: Waiting up to 5m0s for pod "downwardapi-volume-304753ae-fe29-4cea-9cf1-e9634d563c6a" in namespace "downward-api-4448" to be "Succeeded or Failed" Sep 15 11:04:58.216: INFO: Pod "downwardapi-volume-304753ae-fe29-4cea-9cf1-e9634d563c6a": Phase="Pending", Reason="", readiness=false. Elapsed: 13.869732ms Sep 15 11:05:00.483: INFO: Pod "downwardapi-volume-304753ae-fe29-4cea-9cf1-e9634d563c6a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.280361475s Sep 15 11:05:02.487: INFO: Pod "downwardapi-volume-304753ae-fe29-4cea-9cf1-e9634d563c6a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.284247678s STEP: Saw pod success Sep 15 11:05:02.487: INFO: Pod "downwardapi-volume-304753ae-fe29-4cea-9cf1-e9634d563c6a" satisfied condition "Succeeded or Failed" Sep 15 11:05:02.490: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-304753ae-fe29-4cea-9cf1-e9634d563c6a container client-container: STEP: delete the pod Sep 15 11:05:02.552: INFO: Waiting for pod downwardapi-volume-304753ae-fe29-4cea-9cf1-e9634d563c6a to disappear Sep 15 11:05:02.564: INFO: Pod downwardapi-volume-304753ae-fe29-4cea-9cf1-e9634d563c6a no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:05:02.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4448" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":303,"completed":125,"skipped":2099,"failed":0} SS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:05:02.570: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:05:18.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6497" for this suite. 
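The ResourceQuota lifecycle just exercised (status calculated, usage captured when the ConfigMap is created, usage released when it is deleted) amounts to used/hard bookkeeping. A toy model of that bookkeeping; everything here is a stand-in for the real quota controller, which recomputes `status.used` asynchronously:

```shell
# Toy model of ResourceQuota bookkeeping for configmaps: hard is the
# admission limit, used tracks live objects. This only mirrors the
# capture/release behavior the test asserts, not the real controller.
hard=1
used=0

create_configmap() {
    if [ "$used" -lt "$hard" ]; then
        used=$((used + 1))
    else
        echo "forbidden: exceeded quota" >&2
        return 1
    fi
}

delete_configmap() {
    used=$((used - 1))
}

create_configmap
echo "after create: used=$used hard=$hard"   # quota captures creation
delete_configmap
echo "after delete: used=$used hard=$hard"   # quota releases usage
```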
• [SLOW TEST:16.341 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":303,"completed":126,"skipped":2101,"failed":0} SS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:05:18.911: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Sep 15 11:05:19.011: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5950 
/api/v1/namespaces/watch-5950/configmaps/e2e-watch-test-label-changed d54f88cf-9782-42f5-9df9-2b6546098bc2 443146 0 2020-09-15 11:05:18 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-09-15 11:05:18 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Sep 15 11:05:19.011: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5950 /api/v1/namespaces/watch-5950/configmaps/e2e-watch-test-label-changed d54f88cf-9782-42f5-9df9-2b6546098bc2 443147 0 2020-09-15 11:05:18 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-09-15 11:05:19 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Sep 15 11:05:19.011: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5950 /api/v1/namespaces/watch-5950/configmaps/e2e-watch-test-label-changed d54f88cf-9782-42f5-9df9-2b6546098bc2 443148 0 2020-09-15 11:05:18 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-09-15 11:05:19 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Sep 15 11:05:29.051: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5950 /api/v1/namespaces/watch-5950/configmaps/e2e-watch-test-label-changed d54f88cf-9782-42f5-9df9-2b6546098bc2 443185 0 2020-09-15 11:05:18 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-09-15 11:05:29 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Sep 15 11:05:29.051: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5950 /api/v1/namespaces/watch-5950/configmaps/e2e-watch-test-label-changed d54f88cf-9782-42f5-9df9-2b6546098bc2 443186 0 2020-09-15 11:05:18 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-09-15 11:05:29 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} Sep 15 11:05:29.051: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5950 /api/v1/namespaces/watch-5950/configmaps/e2e-watch-test-label-changed d54f88cf-9782-42f5-9df9-2b6546098bc2 443187 0 2020-09-15 11:05:18 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-09-15 11:05:29 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:05:29.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5950" for this suite. 
• [SLOW TEST:10.171 seconds] [sig-api-machinery] Watchers /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":303,"completed":127,"skipped":2103,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] server version should find the server version [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] server version /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:05:29.083: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename server-version STEP: Waiting for a default service account to be provisioned in namespace [It] should find the server version [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Request ServerVersion STEP: Confirm major version Sep 15 11:05:29.150: INFO: Major version: 1 STEP: Confirm minor version Sep 15 11:05:29.150: INFO: cleanMinorVersion: 19 Sep 15 11:05:29.150: INFO: Minor version: 19 [AfterEach] [sig-api-machinery] server version 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:05:29.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "server-version-592" for this suite. •{"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":303,"completed":128,"skipped":2147,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:05:29.158: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 15 11:05:29.278: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:05:33.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2738" for 
this suite. •{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":303,"completed":129,"skipped":2159,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Aggregator /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:05:33.332: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Sep 15 11:05:33.447: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the sample API server. 
Sep 15 11:05:34.102: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Sep 15 11:05:36.290: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735764734, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735764734, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735764734, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735764734, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-67dc674868\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 15 11:05:38.293: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735764734, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735764734, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735764734, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735764734, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-apiserver-deployment-67dc674868\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 15 11:05:40.299: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735764734, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735764734, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735764734, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735764734, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-67dc674868\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 15 11:05:42.295: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735764734, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735764734, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735764734, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735764734, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-67dc674868\" is 
progressing."}}, CollisionCount:(*int32)(nil)} Sep 15 11:05:44.295: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735764734, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735764734, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735764734, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735764734, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-67dc674868\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 15 11:05:46.396: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735764734, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735764734, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735764734, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735764734, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-67dc674868\" is progressing."}}, CollisionCount:(*int32)(nil)} 
Sep 15 11:05:48.850: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735764734, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735764734, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735764734, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735764734, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-67dc674868\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 15 11:05:51.231: INFO: Waited 927.978661ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:05:51.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-69" for this suite. 
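[annotation] Registering a sample API server with the aggregator, as this test does, hinges on an APIService object that points the kube-apiserver at a Service fronting the deployment. A hedged sketch follows; the group/version and Service name are assumptions (the log above does not show them), not values taken from this run.

```yaml
# Illustrative APIService registration; group, version, and service
# name are assumed, matching the upstream sample-apiserver convention.
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.wardle.example.com   # assumed group/version
spec:
  group: wardle.example.com
  version: v1alpha1
  groupPriorityMinimum: 2000
  versionPriority: 200
  service:
    name: sample-api                  # assumed Service in front of sample-apiserver-deployment
    namespace: aggregator-69
  # caBundle: <base64-encoded CA> is required here unless
  # insecureSkipTLSVerify: true is set.
```

The repeated `MinimumReplicasUnavailable` status dumps above are the test polling until the backing deployment's replica becomes Available; only then can the aggregated API answer requests.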
• [SLOW TEST:18.682 seconds] [sig-api-machinery] Aggregator /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":303,"completed":130,"skipped":2184,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:05:52.015: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override all Sep 15 11:05:52.275: INFO: Waiting up to 5m0s for pod "client-containers-6180c102-cd0f-4c59-9be3-7cc16ed7a6f2" in namespace "containers-3440" to be "Succeeded or Failed" Sep 15 11:05:52.433: INFO: Pod "client-containers-6180c102-cd0f-4c59-9be3-7cc16ed7a6f2": 
Phase="Pending", Reason="", readiness=false. Elapsed: 158.652886ms Sep 15 11:05:54.438: INFO: Pod "client-containers-6180c102-cd0f-4c59-9be3-7cc16ed7a6f2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.163834808s Sep 15 11:05:56.443: INFO: Pod "client-containers-6180c102-cd0f-4c59-9be3-7cc16ed7a6f2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.168594414s STEP: Saw pod success Sep 15 11:05:56.443: INFO: Pod "client-containers-6180c102-cd0f-4c59-9be3-7cc16ed7a6f2" satisfied condition "Succeeded or Failed" Sep 15 11:05:56.446: INFO: Trying to get logs from node kali-worker2 pod client-containers-6180c102-cd0f-4c59-9be3-7cc16ed7a6f2 container test-container: STEP: delete the pod Sep 15 11:05:56.482: INFO: Waiting for pod client-containers-6180c102-cd0f-4c59-9be3-7cc16ed7a6f2 to disappear Sep 15 11:05:56.488: INFO: Pod client-containers-6180c102-cd0f-4c59-9be3-7cc16ed7a6f2 no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:05:56.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3440" for this suite. 
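[annotation] The "override all" case above maps onto the pod-spec `command` and `args` fields, which replace the image's ENTRYPOINT and CMD respectively. A minimal illustrative pod; the name and image tag are assumptions, not taken from this log.

```yaml
# Hypothetical pod overriding both the image entrypoint and arguments.
apiVersion: v1
kind: Pod
metadata:
  name: override-demo       # illustrative; the test generates its own name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: k8s.gcr.io/e2e-test-images/busybox:1.29  # assumed image/tag
    command: ["/bin/echo"]                # overrides the image ENTRYPOINT
    args: ["override", "arguments"]       # overrides the image CMD
```

The pod runs to completion and its log contains the echoed arguments, which is why the test waits for the "Succeeded or Failed" condition and then reads the container log.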
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":303,"completed":131,"skipped":2202,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:05:56.515: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:78 [It] deployment should delete old replica sets [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 15 11:05:56.601: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Sep 15 11:06:01.604: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Sep 15 11:06:01.604: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 Sep 15 11:06:05.743: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-2566 /apis/apps/v1/namespaces/deployment-2566/deployments/test-cleanup-deployment 
87e2eefd-97d0-4b5d-a3a7-040e50bcb06e 443481 1 2020-09-15 11:06:01 +0000 UTC map[name:cleanup-pod] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 2020-09-15 11:06:01 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-09-15 11:06:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00314ec18 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-09-15 11:06:01 +0000 UTC,LastTransitionTime:2020-09-15 11:06:01 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-cleanup-deployment-5d446bdd47" has successfully progressed.,LastUpdateTime:2020-09-15 11:06:05 +0000 UTC,LastTransitionTime:2020-09-15 11:06:01 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Sep 15 11:06:05.753: INFO: New ReplicaSet "test-cleanup-deployment-5d446bdd47" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-5d446bdd47 deployment-2566 /apis/apps/v1/namespaces/deployment-2566/replicasets/test-cleanup-deployment-5d446bdd47 50309198-4501-48a8-869c-50406299ba7c 443470 1 2020-09-15 11:06:01 +0000 UTC map[name:cleanup-pod pod-template-hash:5d446bdd47] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 
Deployment test-cleanup-deployment 87e2eefd-97d0-4b5d-a3a7-040e50bcb06e 0xc00314f067 0xc00314f068}] [] [{kube-controller-manager Update apps/v1 2020-09-15 11:06:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"87e2eefd-97d0-4b5d-a3a7-040e50bcb06e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 5d446bdd47,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:5d446bdd47] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00314f108 ClusterFirst map[] false 
false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Sep 15 11:06:05.777: INFO: Pod "test-cleanup-deployment-5d446bdd47-9n7qc" is available: &Pod{ObjectMeta:{test-cleanup-deployment-5d446bdd47-9n7qc test-cleanup-deployment-5d446bdd47- deployment-2566 /api/v1/namespaces/deployment-2566/pods/test-cleanup-deployment-5d446bdd47-9n7qc 9140a03e-ff33-47e6-b045-92c17bad1a60 443469 0 2020-09-15 11:06:01 +0000 UTC map[name:cleanup-pod pod-template-hash:5d446bdd47] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-5d446bdd47 50309198-4501-48a8-869c-50406299ba7c 0xc005d7afb7 0xc005d7afb8}] [] [{kube-controller-manager Update v1 2020-09-15 11:06:01 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"50309198-4501-48a8-869c-50406299ba7c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-15 11:06:05 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.111\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pxhfb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pxhfb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pxhfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePoli
cy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 11:06:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 11:06:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 11:06:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 11:06:01 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:10.244.1.111,StartTime:2020-09-15 11:06:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-15 11:06:04 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0,ContainerID:containerd://d983a34760488c1f33e3bf3e0787f166f4deec7abcb5356d2356cda62b9a23ba,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.111,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:06:05.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2566" for this suite. 
• [SLOW TEST:9.269 seconds] [sig-apps] Deployment /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":303,"completed":132,"skipped":2215,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:06:05.785: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in container's args Sep 15 11:06:05.946: INFO: Waiting up to 5m0s for pod "var-expansion-7469b28a-713b-4217-ac2e-8f3efaa8b838" in namespace "var-expansion-4944" to be "Succeeded or Failed" Sep 15 11:06:06.109: INFO: Pod "var-expansion-7469b28a-713b-4217-ac2e-8f3efaa8b838": Phase="Pending", Reason="", readiness=false. 
Elapsed: 162.726878ms Sep 15 11:06:08.138: INFO: Pod "var-expansion-7469b28a-713b-4217-ac2e-8f3efaa8b838": Phase="Pending", Reason="", readiness=false. Elapsed: 2.191999822s Sep 15 11:06:10.142: INFO: Pod "var-expansion-7469b28a-713b-4217-ac2e-8f3efaa8b838": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.196340025s STEP: Saw pod success Sep 15 11:06:10.142: INFO: Pod "var-expansion-7469b28a-713b-4217-ac2e-8f3efaa8b838" satisfied condition "Succeeded or Failed" Sep 15 11:06:10.146: INFO: Trying to get logs from node kali-worker2 pod var-expansion-7469b28a-713b-4217-ac2e-8f3efaa8b838 container dapi-container: STEP: delete the pod Sep 15 11:06:10.160: INFO: Waiting for pod var-expansion-7469b28a-713b-4217-ac2e-8f3efaa8b838 to disappear Sep 15 11:06:10.182: INFO: Pod var-expansion-7469b28a-713b-4217-ac2e-8f3efaa8b838 no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:06:10.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4944" for this suite. 
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":303,"completed":133,"skipped":2283,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:06:10.191: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-4338.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-4338.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-4338.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-4338.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-4338.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4338.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Sep 15 11:06:16.362: INFO: DNS probes using dns-4338/dns-test-c962364f-9672-4220-a317-f5ac96a71b18 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:06:16.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4338" for this suite. 
• [SLOW TEST:6.304 seconds] [sig-network] DNS /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":303,"completed":134,"skipped":2319,"failed":0} SSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:06:16.495: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to change the type from NodePort to ExternalName [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service nodeport-service with the type=NodePort in namespace services-1099 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-1099 STEP: creating 
replication controller externalsvc in namespace services-1099 I0915 11:06:17.239636 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-1099, replica count: 2 I0915 11:06:20.290096 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0915 11:06:23.290316 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Sep 15 11:06:23.358: INFO: Creating new exec pod Sep 15 11:06:27.390: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=services-1099 execpodhd44r -- /bin/sh -x -c nslookup nodeport-service.services-1099.svc.cluster.local' Sep 15 11:06:27.625: INFO: stderr: "I0915 11:06:27.525821 1837 log.go:181] (0xc00003bad0) (0xc000d868c0) Create stream\nI0915 11:06:27.525894 1837 log.go:181] (0xc00003bad0) (0xc000d868c0) Stream added, broadcasting: 1\nI0915 11:06:27.532846 1837 log.go:181] (0xc00003bad0) Reply frame received for 1\nI0915 11:06:27.532888 1837 log.go:181] (0xc00003bad0) (0xc000d860a0) Create stream\nI0915 11:06:27.532899 1837 log.go:181] (0xc00003bad0) (0xc000d860a0) Stream added, broadcasting: 3\nI0915 11:06:27.533733 1837 log.go:181] (0xc00003bad0) Reply frame received for 3\nI0915 11:06:27.533770 1837 log.go:181] (0xc00003bad0) (0xc0006bf040) Create stream\nI0915 11:06:27.533780 1837 log.go:181] (0xc00003bad0) (0xc0006bf040) Stream added, broadcasting: 5\nI0915 11:06:27.534523 1837 log.go:181] (0xc00003bad0) Reply frame received for 5\nI0915 11:06:27.604247 1837 log.go:181] (0xc00003bad0) Data frame received for 5\nI0915 11:06:27.604276 1837 log.go:181] (0xc0006bf040) (5) Data frame handling\nI0915 11:06:27.604294 1837 log.go:181] (0xc0006bf040) (5) Data frame sent\n+ nslookup 
nodeport-service.services-1099.svc.cluster.local\nI0915 11:06:27.616033 1837 log.go:181] (0xc00003bad0) Data frame received for 3\nI0915 11:06:27.616074 1837 log.go:181] (0xc000d860a0) (3) Data frame handling\nI0915 11:06:27.616108 1837 log.go:181] (0xc000d860a0) (3) Data frame sent\nI0915 11:06:27.617280 1837 log.go:181] (0xc00003bad0) Data frame received for 3\nI0915 11:06:27.617297 1837 log.go:181] (0xc000d860a0) (3) Data frame handling\nI0915 11:06:27.617313 1837 log.go:181] (0xc000d860a0) (3) Data frame sent\nI0915 11:06:27.617884 1837 log.go:181] (0xc00003bad0) Data frame received for 3\nI0915 11:06:27.617901 1837 log.go:181] (0xc000d860a0) (3) Data frame handling\nI0915 11:06:27.617923 1837 log.go:181] (0xc00003bad0) Data frame received for 5\nI0915 11:06:27.617948 1837 log.go:181] (0xc0006bf040) (5) Data frame handling\nI0915 11:06:27.621740 1837 log.go:181] (0xc00003bad0) Data frame received for 1\nI0915 11:06:27.621754 1837 log.go:181] (0xc000d868c0) (1) Data frame handling\nI0915 11:06:27.621761 1837 log.go:181] (0xc000d868c0) (1) Data frame sent\nI0915 11:06:27.621885 1837 log.go:181] (0xc00003bad0) (0xc000d868c0) Stream removed, broadcasting: 1\nI0915 11:06:27.621906 1837 log.go:181] (0xc00003bad0) Go away received\nI0915 11:06:27.622155 1837 log.go:181] (0xc00003bad0) (0xc000d868c0) Stream removed, broadcasting: 1\nI0915 11:06:27.622169 1837 log.go:181] (0xc00003bad0) (0xc000d860a0) Stream removed, broadcasting: 3\nI0915 11:06:27.622175 1837 log.go:181] (0xc00003bad0) (0xc0006bf040) Stream removed, broadcasting: 5\n" Sep 15 11:06:27.625: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-1099.svc.cluster.local\tcanonical name = externalsvc.services-1099.svc.cluster.local.\nName:\texternalsvc.services-1099.svc.cluster.local\nAddress: 10.100.76.195\n\n" STEP: deleting ReplicationController externalsvc in namespace services-1099, will wait for the garbage collector to delete the pods Sep 15 11:06:27.684: INFO: 
Deleting ReplicationController externalsvc took: 5.576953ms Sep 15 11:06:27.784: INFO: Terminating ReplicationController externalsvc pods took: 100.251565ms Sep 15 11:06:43.335: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:06:43.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1099" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:26.905 seconds] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":303,"completed":135,"skipped":2325,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:06:43.401: INFO: >>> kubeConfig: /root/.kube/config STEP: 
Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 15 11:06:44.563: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 15 11:06:46.590: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735764804, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735764804, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735764804, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735764804, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 15 11:06:49.617: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: 
Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Sep 15 11:06:49.639: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:06:49.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1568" for this suite. STEP: Destroying namespace "webhook-1568-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.448 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":303,"completed":136,"skipped":2335,"failed":0} [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:06:49.850: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 15 11:06:49.925: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Sep 15 11:06:52.901: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5714 create -f -' Sep 15 11:06:56.495: INFO: stderr: "" Sep 15 11:06:56.495: INFO: stdout: "e2e-test-crd-publish-openapi-7792-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Sep 15 11:06:56.496: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5714 delete e2e-test-crd-publish-openapi-7792-crds test-foo' Sep 15 11:06:56.622: INFO: stderr: "" Sep 15 11:06:56.622: INFO: stdout: "e2e-test-crd-publish-openapi-7792-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Sep 15 11:06:56.622: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5714 apply -f -' Sep 15 11:06:56.884: INFO: stderr: "" Sep 15 11:06:56.884: INFO: stdout: "e2e-test-crd-publish-openapi-7792-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Sep 15 11:06:56.884: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config 
--namespace=crd-publish-openapi-5714 delete e2e-test-crd-publish-openapi-7792-crds test-foo' Sep 15 11:06:56.998: INFO: stderr: "" Sep 15 11:06:56.998: INFO: stdout: "e2e-test-crd-publish-openapi-7792-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Sep 15 11:06:56.998: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5714 create -f -' Sep 15 11:06:57.262: INFO: rc: 1 Sep 15 11:06:57.262: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5714 apply -f -' Sep 15 11:06:57.545: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Sep 15 11:06:57.545: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5714 create -f -' Sep 15 11:06:57.803: INFO: rc: 1 Sep 15 11:06:57.803: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5714 apply -f -' Sep 15 11:06:58.067: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Sep 15 11:06:58.068: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7792-crds' Sep 15 11:06:58.385: INFO: stderr: "" Sep 15 11:06:58.386: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7792-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Sep 15 11:06:58.386: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7792-crds.metadata' Sep 15 11:06:58.729: INFO: stderr: "" Sep 15 11:06:58.729: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7792-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. 
This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC.\n\n Populated by the system. Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. 
In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested.\n\n Populated by the system when a graceful deletion is requested. Read-only.\n More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. 
The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server.\n\n If this field is specified and the generated name exists, the server will\n NOT return a 409 - instead, it will either return 201 Created or 500 with\n Reason ServerTimeout indicating a unique name could not be found in the\n time allotted, and the client should retry (optionally after the time\n indicated in the Retry-After header).\n\n Applied only if Name is not specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within which each name must be unique. 
An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty.\n\n Must be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended on by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources.\n\n Populated by the system. Read-only. Value must be treated as opaque by\n clients. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only.\n\n DEPRECATED Kubernetes will stop propagating this field in 1.20 release and\n the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations.\n\n Populated by the system. Read-only. 
More info:\n http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Sep 15 11:06:58.730: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7792-crds.spec' Sep 15 11:06:59.012: INFO: stderr: "" Sep 15 11:06:59.012: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7792-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Sep 15 11:06:59.012: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7792-crds.spec.bars' Sep 15 11:06:59.320: INFO: stderr: "" Sep 15 11:06:59.320: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7792-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Sep 15 11:06:59.321: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7792-crds.spec.bars2' Sep 15 11:06:59.576: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:07:01.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5714" for this suite. 
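The `generateName` semantics described in the `kubectl explain` output above (prefix plus unique suffix, truncated to stay within the name length limit) can be sketched as follows. This is an illustrative Python sketch, not the apiserver's actual Go implementation; the function name and the suffix alphabet are assumptions.

```python
import random
import string

MAX_NAME_LENGTH = 253  # DNS subdomain length limit for object names
SUFFIX_LENGTH = 5      # Kubernetes appends a short random suffix

def generate_name(prefix: str) -> str:
    """Derive a concrete name from metadata.generateName: append a random
    suffix, truncating the prefix if needed so the final name stays
    within the length limit (so the returned name differs from the
    name the client passed, as the docs above note)."""
    # Illustrative alphabet; the real apiserver omits ambiguous characters.
    alphabet = string.ascii_lowercase + string.digits
    suffix = "".join(random.choices(alphabet, k=SUFFIX_LENGTH))
    return prefix[: MAX_NAME_LENGTH - SUFFIX_LENGTH] + suffix
```

Because the suffix is short, collisions are possible, which is why the docs above say the server may answer 500 with Reason ServerTimeout rather than 409 when it cannot find a unique name in time.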
• [SLOW TEST:11.707 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":303,"completed":137,"skipped":2335,"failed":0} SSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:07:01.557: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-315e4a59-766e-492e-81aa-6aeae1c1e769 STEP: Creating a pod to test consume configMaps Sep 15 11:07:01.609: INFO: Waiting up to 5m0s for pod "pod-configmaps-a095cfd2-8038-4233-9672-3a68c8b65656" in namespace "configmap-3583" to be "Succeeded or Failed" Sep 15 11:07:01.642: INFO: Pod 
"pod-configmaps-a095cfd2-8038-4233-9672-3a68c8b65656": Phase="Pending", Reason="", readiness=false. Elapsed: 32.739917ms Sep 15 11:07:03.667: INFO: Pod "pod-configmaps-a095cfd2-8038-4233-9672-3a68c8b65656": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05801622s Sep 15 11:07:05.671: INFO: Pod "pod-configmaps-a095cfd2-8038-4233-9672-3a68c8b65656": Phase="Running", Reason="", readiness=true. Elapsed: 4.062229546s Sep 15 11:07:07.677: INFO: Pod "pod-configmaps-a095cfd2-8038-4233-9672-3a68c8b65656": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.068273119s STEP: Saw pod success Sep 15 11:07:07.677: INFO: Pod "pod-configmaps-a095cfd2-8038-4233-9672-3a68c8b65656" satisfied condition "Succeeded or Failed" Sep 15 11:07:07.681: INFO: Trying to get logs from node kali-worker pod pod-configmaps-a095cfd2-8038-4233-9672-3a68c8b65656 container configmap-volume-test: STEP: delete the pod Sep 15 11:07:07.752: INFO: Waiting for pod pod-configmaps-a095cfd2-8038-4233-9672-3a68c8b65656 to disappear Sep 15 11:07:07.772: INFO: Pod pod-configmaps-a095cfd2-8038-4233-9672-3a68c8b65656 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:07:07.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3583" for this suite. 
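The repeated `Phase="Pending" ... Elapsed: ...` lines above come from a poll loop that waits for the test pod to reach a terminal phase. A minimal sketch of that pattern (the real helper lives in the Go e2e framework; the stubbed `get_phase` callable stands in for the actual API call):

```python
import time

def wait_for_pod_condition(get_phase, timeout_s=300.0, interval_s=2.0,
                           now=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until the pod is "Succeeded" or "Failed", or the
    timeout (5m0s in the log above) elapses. Returns the terminal phase."""
    start = now()
    while True:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        if now() - start > timeout_s:
            raise TimeoutError(f"pod still {phase} after {timeout_s}s")
        sleep(interval_s)
```

Injecting `now` and `sleep` keeps the sketch testable without real waiting; the framework's version also logs each observation, which is what produces the `Elapsed:` lines.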
• [SLOW TEST:6.224 seconds] [sig-storage] ConfigMap /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":303,"completed":138,"skipped":2343,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:07:07.781: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0915 11:07:17.920437 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Sep 15 11:08:19.941: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. 
[AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:08:19.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9916" for this suite. • [SLOW TEST:72.169 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":303,"completed":139,"skipped":2375,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:08:19.950: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 15 11:08:20.027: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:08:20.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-8663" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":303,"completed":140,"skipped":2384,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:08:20.677: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the 
pods STEP: Gathering metrics W0915 11:09:01.370679 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Sep 15 11:10:03.388: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. Sep 15 11:10:03.388: INFO: Deleting pod "simpletest.rc-547tl" in namespace "gc-8750" Sep 15 11:10:03.408: INFO: Deleting pod "simpletest.rc-67fdm" in namespace "gc-8750" Sep 15 11:10:03.524: INFO: Deleting pod "simpletest.rc-gkbpz" in namespace "gc-8750" Sep 15 11:10:03.771: INFO: Deleting pod "simpletest.rc-kcxxk" in namespace "gc-8750" Sep 15 11:10:03.903: INFO: Deleting pod "simpletest.rc-kq6dw" in namespace "gc-8750" Sep 15 11:10:03.973: INFO: Deleting pod "simpletest.rc-m4fnw" in namespace "gc-8750" Sep 15 11:10:04.262: INFO: Deleting pod "simpletest.rc-nxh84" in namespace "gc-8750" Sep 15 11:10:04.363: INFO: Deleting pod "simpletest.rc-wjc2l" in namespace "gc-8750" Sep 15 11:10:04.598: INFO: Deleting pod "simpletest.rc-wtw5q" in namespace "gc-8750" Sep 15 11:10:04.723: INFO: Deleting pod "simpletest.rc-zxdkf" in namespace "gc-8750" [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:10:04.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8750" for this suite. 
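The two garbage-collector tests above exercise deletion propagation: without orphaning, dependents are collected with their owner; with orphan delete options, the owner reference is stripped and the pods survive (which is why the log shows the test deleting each `simpletest.rc-*` pod by hand afterwards). A toy model of that behavior, with simplified dict-based objects rather than real API types:

```python
def delete_owner(owner_uid, pods, orphan=False):
    """Toy model of delete propagation. orphan=True strips the owner's
    reference in place and all pods survive; otherwise pods owned by
    owner_uid are garbage collected and only the rest are returned."""
    if orphan:
        for pod in pods:
            pod["ownerReferences"] = [
                r for r in pod.get("ownerReferences", []) if r["uid"] != owner_uid
            ]
        return pods
    return [
        p for p in pods
        if all(r["uid"] != owner_uid for r in p.get("ownerReferences", []))
    ]
```

In the real API this corresponds to `propagationPolicy: Orphan` versus `Background`/`Foreground` in the delete options.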
• [SLOW TEST:104.283 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":303,"completed":141,"skipped":2391,"failed":0} SSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:10:04.961: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-fb6cce56-b8a9-4a22-b7bc-3c375d7a5fcd STEP: Creating a pod to test consume configMaps Sep 15 11:10:05.515: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-347b95c0-0381-47bb-aa89-19c6b969c906" in namespace "projected-3314" to be "Succeeded or Failed" Sep 15 11:10:05.527: INFO: Pod 
"pod-projected-configmaps-347b95c0-0381-47bb-aa89-19c6b969c906": Phase="Pending", Reason="", readiness=false. Elapsed: 11.918646ms Sep 15 11:10:07.531: INFO: Pod "pod-projected-configmaps-347b95c0-0381-47bb-aa89-19c6b969c906": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015691314s Sep 15 11:10:09.551: INFO: Pod "pod-projected-configmaps-347b95c0-0381-47bb-aa89-19c6b969c906": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036098348s Sep 15 11:10:11.569: INFO: Pod "pod-projected-configmaps-347b95c0-0381-47bb-aa89-19c6b969c906": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.054200486s STEP: Saw pod success Sep 15 11:10:11.569: INFO: Pod "pod-projected-configmaps-347b95c0-0381-47bb-aa89-19c6b969c906" satisfied condition "Succeeded or Failed" Sep 15 11:10:11.572: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-347b95c0-0381-47bb-aa89-19c6b969c906 container projected-configmap-volume-test: STEP: delete the pod Sep 15 11:10:11.670: INFO: Waiting for pod pod-projected-configmaps-347b95c0-0381-47bb-aa89-19c6b969c906 to disappear Sep 15 11:10:11.686: INFO: Pod pod-projected-configmaps-347b95c0-0381-47bb-aa89-19c6b969c906 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:10:11.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3314" for this suite. 
• [SLOW TEST:6.738 seconds] [sig-storage] Projected configMap /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":303,"completed":142,"skipped":2396,"failed":0} S ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:10:11.699: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 15 11:10:11.791: INFO: Waiting up to 5m0s for pod 
"busybox-readonly-false-f4e2a847-5bec-4df5-a482-a52bfd987cfe" in namespace "security-context-test-6888" to be "Succeeded or Failed" Sep 15 11:10:11.818: INFO: Pod "busybox-readonly-false-f4e2a847-5bec-4df5-a482-a52bfd987cfe": Phase="Pending", Reason="", readiness=false. Elapsed: 26.314731ms Sep 15 11:10:13.824: INFO: Pod "busybox-readonly-false-f4e2a847-5bec-4df5-a482-a52bfd987cfe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033003775s Sep 15 11:10:15.841: INFO: Pod "busybox-readonly-false-f4e2a847-5bec-4df5-a482-a52bfd987cfe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.049138894s Sep 15 11:10:15.841: INFO: Pod "busybox-readonly-false-f4e2a847-5bec-4df5-a482-a52bfd987cfe" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:10:15.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-6888" for this suite. 
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":303,"completed":143,"skipped":2397,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:10:15.878: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-cd3f1ace-f3b6-4f3c-b9c8-100ee8194d0e STEP: Creating a pod to test consume configMaps Sep 15 11:10:16.106: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6ed8656f-d0a3-4b87-a824-574a00d8fee3" in namespace "projected-1245" to be "Succeeded or Failed" Sep 15 11:10:16.172: INFO: Pod "pod-projected-configmaps-6ed8656f-d0a3-4b87-a824-574a00d8fee3": Phase="Pending", Reason="", readiness=false. Elapsed: 65.60779ms Sep 15 11:10:18.196: INFO: Pod "pod-projected-configmaps-6ed8656f-d0a3-4b87-a824-574a00d8fee3": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.089930434s Sep 15 11:10:20.219: INFO: Pod "pod-projected-configmaps-6ed8656f-d0a3-4b87-a824-574a00d8fee3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.11209477s STEP: Saw pod success Sep 15 11:10:20.219: INFO: Pod "pod-projected-configmaps-6ed8656f-d0a3-4b87-a824-574a00d8fee3" satisfied condition "Succeeded or Failed" Sep 15 11:10:20.221: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-6ed8656f-d0a3-4b87-a824-574a00d8fee3 container projected-configmap-volume-test: STEP: delete the pod Sep 15 11:10:20.250: INFO: Waiting for pod pod-projected-configmaps-6ed8656f-d0a3-4b87-a824-574a00d8fee3 to disappear Sep 15 11:10:20.264: INFO: Pod pod-projected-configmaps-6ed8656f-d0a3-4b87-a824-574a00d8fee3 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:10:20.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1245" for this suite. 
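The "volume with mappings" test above mounts a ConfigMap where selected keys are projected to chosen file paths via `items`. The projection rule can be sketched like this (illustrative helper, not the kubelet's volume plugin):

```python
def project_configmap(data, items=None):
    """Map ConfigMap keys to file paths as a volume mount would:
    with no items, every key becomes a file named after the key;
    with items, only the listed keys appear, under the given paths."""
    if items is None:
        return dict(data)
    return {item["path"]: data[item["key"]] for item in items}
```

For example, mapping key `data-1` to `path/to/data-2` yields a single file at the mapped path containing that key's value, which is what the container in the test reads back.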
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":303,"completed":144,"skipped":2418,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] PodTemplates should delete a collection of pod templates [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] PodTemplates /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:10:20.271: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a collection of pod templates [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create set of pod templates Sep 15 11:10:20.358: INFO: created test-podtemplate-1 Sep 15 11:10:20.366: INFO: created test-podtemplate-2 Sep 15 11:10:20.372: INFO: created test-podtemplate-3 STEP: get a list of pod templates with a label in the current namespace STEP: delete collection of pod templates Sep 15 11:10:20.404: INFO: requesting DeleteCollection of pod templates STEP: check that the list of pod templates matches the requested quantity Sep 15 11:10:20.504: INFO: requesting list of pod templates to confirm quantity [AfterEach] [sig-node] PodTemplates /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:10:20.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-9836" for this suite. 
•{"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":303,"completed":145,"skipped":2436,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:10:20.522: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating secret secrets-8241/secret-test-bc164b50-ded0-4b92-8f86-5c0dd00613b1 STEP: Creating a pod to test consume secrets Sep 15 11:10:20.650: INFO: Waiting up to 5m0s for pod "pod-configmaps-c82e5159-d280-4724-b568-7ced02f780e6" in namespace "secrets-8241" to be "Succeeded or Failed" Sep 15 11:10:20.653: INFO: Pod "pod-configmaps-c82e5159-d280-4724-b568-7ced02f780e6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.473892ms Sep 15 11:10:22.696: INFO: Pod "pod-configmaps-c82e5159-d280-4724-b568-7ced02f780e6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046669311s Sep 15 11:10:24.700: INFO: Pod "pod-configmaps-c82e5159-d280-4724-b568-7ced02f780e6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.050327849s STEP: Saw pod success Sep 15 11:10:24.700: INFO: Pod "pod-configmaps-c82e5159-d280-4724-b568-7ced02f780e6" satisfied condition "Succeeded or Failed" Sep 15 11:10:24.702: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-c82e5159-d280-4724-b568-7ced02f780e6 container env-test: STEP: delete the pod Sep 15 11:10:24.749: INFO: Waiting for pod pod-configmaps-c82e5159-d280-4724-b568-7ced02f780e6 to disappear Sep 15 11:10:24.755: INFO: Pod pod-configmaps-c82e5159-d280-4724-b568-7ced02f780e6 no longer exists [AfterEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:10:24.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8241" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":303,"completed":146,"skipped":2472,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:10:24.793: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should release no 
longer matching pods [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Sep 15 11:10:24.909: INFO: Pod name pod-release: Found 0 pods out of 1 Sep 15 11:10:29.961: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:10:30.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-6518" for this suite. • [SLOW TEST:5.308 seconds] [sig-apps] ReplicationController /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":303,"completed":147,"skipped":2511,"failed":0} SSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:10:30.101: INFO: >>> kubeConfig: /root/.kube/config STEP: 
Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:89 Sep 15 11:10:30.228: INFO: Waiting up to 1m0s for all nodes to be ready Sep 15 11:11:30.245: INFO: Waiting for terminating namespaces to be deleted... [It] validates basic preemption works [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create pods that use 2/3 of node resources. Sep 15 11:11:30.316: INFO: Created pod: pod0-sched-preemption-low-priority Sep 15 11:11:30.347: INFO: Created pod: pod1-sched-preemption-medium-priority STEP: Wait for pods to be scheduled. STEP: Run a high priority pod that has same requirements as that of lower priority pod [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:11:50.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-9109" for this suite. 
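The preemption test above fills 2/3 of node resources with low- and medium-priority pods, then schedules a high-priority pod with the same requirements as the low-priority one. The core victim-selection idea, heavily simplified (the real scheduler considers PodDisruptionBudgets, affinity, and more), looks like:

```python
def find_victims(pods_on_node, capacity, incoming_request, incoming_priority):
    """Sketch of basic preemption: evict lower-priority pods, lowest
    priority first, until the incoming pod's request fits. Returns the
    victim list, or None if preemption cannot make room."""
    used = sum(p["request"] for p in pods_on_node)
    victims = []
    for pod in sorted(pods_on_node, key=lambda p: p["priority"]):
        if used + incoming_request <= capacity:
            break
        if pod["priority"] < incoming_priority:
            victims.append(pod)
            used -= pod["request"]
    return victims if used + incoming_request <= capacity else None
```

Under this model the low-priority pod is evicted first while the medium-priority pod keeps running, matching the scenario the test validates.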
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77 • [SLOW TEST:80.409 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates basic preemption works [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","total":303,"completed":148,"skipped":2520,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:11:50.511: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] 
[k8s.io] Kubelet /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:11:54.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9250" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":149,"skipped":2529,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:11:54.658: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Sep 15 11:11:59.957: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container 
[AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:11:59.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6143" for this suite. • [SLOW TEST:5.343 seconds] [k8s.io] Container Runtime /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 on terminated container /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:134 should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":303,"completed":150,"skipped":2559,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:12:00.002: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 15 11:12:00.747: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 15 11:12:02.758: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735765120, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735765120, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735765120, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735765120, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 15 11:12:05.824: INFO: Waiting for 
amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:12:05.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9366" for this suite. STEP: Destroying namespace "webhook-9366-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.996 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":303,"completed":151,"skipped":2563,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on 
item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:12:05.998: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 15 11:12:06.085: INFO: Waiting up to 5m0s for pod "downwardapi-volume-19d10f29-712e-474f-81a2-3dd03d0df39a" in namespace "downward-api-3352" to be "Succeeded or Failed" Sep 15 11:12:06.094: INFO: Pod "downwardapi-volume-19d10f29-712e-474f-81a2-3dd03d0df39a": Phase="Pending", Reason="", readiness=false. Elapsed: 9.229749ms Sep 15 11:12:08.098: INFO: Pod "downwardapi-volume-19d10f29-712e-474f-81a2-3dd03d0df39a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013200611s Sep 15 11:12:10.103: INFO: Pod "downwardapi-volume-19d10f29-712e-474f-81a2-3dd03d0df39a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.017938001s STEP: Saw pod success Sep 15 11:12:10.103: INFO: Pod "downwardapi-volume-19d10f29-712e-474f-81a2-3dd03d0df39a" satisfied condition "Succeeded or Failed" Sep 15 11:12:10.106: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-19d10f29-712e-474f-81a2-3dd03d0df39a container client-container: STEP: delete the pod Sep 15 11:12:10.138: INFO: Waiting for pod downwardapi-volume-19d10f29-712e-474f-81a2-3dd03d0df39a to disappear Sep 15 11:12:10.213: INFO: Pod downwardapi-volume-19d10f29-712e-474f-81a2-3dd03d0df39a no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:12:10.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3352" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":152,"skipped":2588,"failed":0} SSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:12:10.226: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod with 
dnsPolicy=None and customized dnsConfig... Sep 15 11:12:10.445: INFO: Created pod &Pod{ObjectMeta:{dns-9239 dns-9239 /api/v1/namespaces/dns-9239/pods/dns-9239 d6860fed-3220-4525-b0f1-8479057621f2 445500 0 2020-09-15 11:12:10 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2020-09-15 11:12:10 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q4hpt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q4hpt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q4hpt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,Volume
Devices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 15 11:12:10.456: INFO: The status of Pod dns-9239 is Pending, waiting for it to be Running (with Ready = true) Sep 15 11:12:12.460: INFO: The status of Pod dns-9239 is Pending, waiting for it to be Running (with Ready = true) Sep 15 11:12:14.461: INFO: The status of Pod dns-9239 is Running (Ready = true) STEP: Verifying 
customized DNS suffix list is configured on pod... Sep 15 11:12:14.461: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-9239 PodName:dns-9239 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 15 11:12:14.461: INFO: >>> kubeConfig: /root/.kube/config I0915 11:12:14.497549 7 log.go:181] (0xc0055e4c60) (0xc0073f0aa0) Create stream I0915 11:12:14.497592 7 log.go:181] (0xc0055e4c60) (0xc0073f0aa0) Stream added, broadcasting: 1 I0915 11:12:14.499356 7 log.go:181] (0xc0055e4c60) Reply frame received for 1 I0915 11:12:14.499393 7 log.go:181] (0xc0055e4c60) (0xc004dfa500) Create stream I0915 11:12:14.499410 7 log.go:181] (0xc0055e4c60) (0xc004dfa500) Stream added, broadcasting: 3 I0915 11:12:14.500081 7 log.go:181] (0xc0055e4c60) Reply frame received for 3 I0915 11:12:14.500122 7 log.go:181] (0xc0055e4c60) (0xc005a50aa0) Create stream I0915 11:12:14.500220 7 log.go:181] (0xc0055e4c60) (0xc005a50aa0) Stream added, broadcasting: 5 I0915 11:12:14.501209 7 log.go:181] (0xc0055e4c60) Reply frame received for 5 I0915 11:12:14.583234 7 log.go:181] (0xc0055e4c60) Data frame received for 3 I0915 11:12:14.583267 7 log.go:181] (0xc004dfa500) (3) Data frame handling I0915 11:12:14.583286 7 log.go:181] (0xc004dfa500) (3) Data frame sent I0915 11:12:14.583968 7 log.go:181] (0xc0055e4c60) Data frame received for 5 I0915 11:12:14.583992 7 log.go:181] (0xc005a50aa0) (5) Data frame handling I0915 11:12:14.584127 7 log.go:181] (0xc0055e4c60) Data frame received for 3 I0915 11:12:14.584279 7 log.go:181] (0xc004dfa500) (3) Data frame handling I0915 11:12:14.586223 7 log.go:181] (0xc0055e4c60) Data frame received for 1 I0915 11:12:14.586266 7 log.go:181] (0xc0073f0aa0) (1) Data frame handling I0915 11:12:14.586324 7 log.go:181] (0xc0073f0aa0) (1) Data frame sent I0915 11:12:14.586358 7 log.go:181] (0xc0055e4c60) (0xc0073f0aa0) Stream removed, broadcasting: 1 I0915 11:12:14.586421 7 log.go:181] (0xc0055e4c60) Go away received I0915 
11:12:14.586481 7 log.go:181] (0xc0055e4c60) (0xc0073f0aa0) Stream removed, broadcasting: 1 I0915 11:12:14.586518 7 log.go:181] (0xc0055e4c60) (0xc004dfa500) Stream removed, broadcasting: 3 I0915 11:12:14.586557 7 log.go:181] (0xc0055e4c60) (0xc005a50aa0) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... Sep 15 11:12:14.586: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-9239 PodName:dns-9239 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 15 11:12:14.586: INFO: >>> kubeConfig: /root/.kube/config I0915 11:12:14.621536 7 log.go:181] (0xc0055e4f20) (0xc0073f0f00) Create stream I0915 11:12:14.621567 7 log.go:181] (0xc0055e4f20) (0xc0073f0f00) Stream added, broadcasting: 1 I0915 11:12:14.623896 7 log.go:181] (0xc0055e4f20) Reply frame received for 1 I0915 11:12:14.623959 7 log.go:181] (0xc0055e4f20) (0xc002878fa0) Create stream I0915 11:12:14.623985 7 log.go:181] (0xc0055e4f20) (0xc002878fa0) Stream added, broadcasting: 3 I0915 11:12:14.625026 7 log.go:181] (0xc0055e4f20) Reply frame received for 3 I0915 11:12:14.625073 7 log.go:181] (0xc0055e4f20) (0xc005a50be0) Create stream I0915 11:12:14.625086 7 log.go:181] (0xc0055e4f20) (0xc005a50be0) Stream added, broadcasting: 5 I0915 11:12:14.625836 7 log.go:181] (0xc0055e4f20) Reply frame received for 5 I0915 11:12:14.695170 7 log.go:181] (0xc0055e4f20) Data frame received for 3 I0915 11:12:14.695202 7 log.go:181] (0xc002878fa0) (3) Data frame handling I0915 11:12:14.695223 7 log.go:181] (0xc002878fa0) (3) Data frame sent I0915 11:12:14.696004 7 log.go:181] (0xc0055e4f20) Data frame received for 3 I0915 11:12:14.696054 7 log.go:181] (0xc002878fa0) (3) Data frame handling I0915 11:12:14.696297 7 log.go:181] (0xc0055e4f20) Data frame received for 5 I0915 11:12:14.696344 7 log.go:181] (0xc005a50be0) (5) Data frame handling I0915 11:12:14.697559 7 log.go:181] (0xc0055e4f20) Data frame received for 1 I0915 
11:12:14.697623 7 log.go:181] (0xc0073f0f00) (1) Data frame handling I0915 11:12:14.697688 7 log.go:181] (0xc0073f0f00) (1) Data frame sent I0915 11:12:14.697741 7 log.go:181] (0xc0055e4f20) (0xc0073f0f00) Stream removed, broadcasting: 1 I0915 11:12:14.697803 7 log.go:181] (0xc0055e4f20) Go away received I0915 11:12:14.697882 7 log.go:181] (0xc0055e4f20) (0xc0073f0f00) Stream removed, broadcasting: 1 I0915 11:12:14.697938 7 log.go:181] (0xc0055e4f20) (0xc002878fa0) Stream removed, broadcasting: 3 I0915 11:12:14.697965 7 log.go:181] (0xc0055e4f20) (0xc005a50be0) Stream removed, broadcasting: 5 Sep 15 11:12:14.698: INFO: Deleting pod dns-9239... [AfterEach] [sig-network] DNS /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:12:14.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9239" for this suite. •{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":303,"completed":153,"skipped":2591,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:12:14.775: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination 
message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Sep 15 11:12:18.917: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:12:18.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7960" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":303,"completed":154,"skipped":2602,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should find a service from listing all namespaces [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:12:18.940: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should find a service from listing all namespaces [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching services [AfterEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:12:19.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-295" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 •{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":303,"completed":155,"skipped":2636,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:12:19.056: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod Sep 15 11:12:19.111: INFO: PodSpec: initContainers in spec.initContainers Sep 15 11:13:10.249: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-6633c4ad-00d7-4c34-bcc8-44fc6cd44775", GenerateName:"", Namespace:"init-container-9558", SelfLink:"/api/v1/namespaces/init-container-9558/pods/pod-init-6633c4ad-00d7-4c34-bcc8-44fc6cd44775", UID:"75fb2bff-32a5-43a0-a68f-6f7b8e731692", ResourceVersion:"445790", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63735765139, loc:(*time.Location)(0x7702840)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"111801902"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0030aa080), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0030aa0a0)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0030aa0c0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0030aa0e0)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-v4bb2", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), 
GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc008db2000), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-v4bb2", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, 
StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-v4bb2", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-v4bb2", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", 
TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0055aa098), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"kali-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc003b18000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0055aa120)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0055aa140)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0055aa148), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0055aa14c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc002552020), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735765139, loc:(*time.Location)(0x7702840)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", 
LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735765139, loc:(*time.Location)(0x7702840)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735765139, loc:(*time.Location)(0x7702840)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735765139, loc:(*time.Location)(0x7702840)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.12", PodIP:"10.244.2.140", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.140"}}, StartTime:(*v1.Time)(0xc0030aa100), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc003b180e0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc003b18150)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://41670ee6b9620401658d08c08786ce1bf52a66aa9149378411180c98be2840a1", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0030aa140), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, 
LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0030aa120), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc0055aa1cf)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:13:10.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-9558" for this suite. 
• [SLOW TEST:51.234 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":303,"completed":156,"skipped":2645,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:13:10.291: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Sep 15 11:13:10.376: INFO: Waiting up to 5m0s for pod "downward-api-7544a41e-28f3-460a-8f44-a960e6fef236" in namespace "downward-api-6161" to be "Succeeded or Failed" Sep 15 11:13:10.385: INFO: Pod "downward-api-7544a41e-28f3-460a-8f44-a960e6fef236": 
Phase="Pending", Reason="", readiness=false. Elapsed: 9.421316ms Sep 15 11:13:12.390: INFO: Pod "downward-api-7544a41e-28f3-460a-8f44-a960e6fef236": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01389172s Sep 15 11:13:14.395: INFO: Pod "downward-api-7544a41e-28f3-460a-8f44-a960e6fef236": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018994294s STEP: Saw pod success Sep 15 11:13:14.395: INFO: Pod "downward-api-7544a41e-28f3-460a-8f44-a960e6fef236" satisfied condition "Succeeded or Failed" Sep 15 11:13:14.398: INFO: Trying to get logs from node kali-worker pod downward-api-7544a41e-28f3-460a-8f44-a960e6fef236 container dapi-container: STEP: delete the pod Sep 15 11:13:14.462: INFO: Waiting for pod downward-api-7544a41e-28f3-460a-8f44-a960e6fef236 to disappear Sep 15 11:13:14.467: INFO: Pod downward-api-7544a41e-28f3-460a-8f44-a960e6fef236 no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:13:14.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6161" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":303,"completed":157,"skipped":2667,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:13:14.474: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 15 11:13:15.177: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 15 11:13:17.188: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735765195, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735765195, loc:(*time.Location)(0x7702840)}}, 
Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735765195, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735765195, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 15 11:13:20.341: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 15 11:13:20.345: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-9789-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:13:21.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9053" for this suite. STEP: Destroying namespace "webhook-9053-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.104 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":303,"completed":158,"skipped":2691,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-instrumentation] Events API should delete a collection of events [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-instrumentation] Events API /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:13:21.578: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-instrumentation] Events API /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 [It] should delete a collection of events [Conformance] 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create set of events STEP: get a list of Events with a label in the current namespace STEP: delete a list of events Sep 15 11:13:21.752: INFO: requesting DeleteCollection of events STEP: check that the list of events matches the requested quantity [AfterEach] [sig-instrumentation] Events API /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:13:21.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-5856" for this suite. •{"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":303,"completed":159,"skipped":2709,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:13:21.783: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-1828 [It] should perform rolling updates and roll backs of template modifications [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a new StatefulSet Sep 15 11:13:21.986: INFO: Found 0 stateful pods, waiting for 3 Sep 15 11:13:31.999: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Sep 15 11:13:31.999: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Sep 15 11:13:32.000: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Sep 15 11:13:41.994: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Sep 15 11:13:41.994: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Sep 15 11:13:41.994: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Sep 15 11:13:42.004: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1828 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Sep 15 11:13:42.259: INFO: stderr: "I0915 11:13:42.137223 2093 log.go:181] (0xc00003a420) (0xc0003cb4a0) Create stream\nI0915 11:13:42.137281 2093 log.go:181] (0xc00003a420) (0xc0003cb4a0) Stream added, broadcasting: 1\nI0915 11:13:42.139199 2093 log.go:181] (0xc00003a420) Reply frame received for 1\nI0915 11:13:42.139245 2093 log.go:181] (0xc00003a420) (0xc0003cbb80) Create stream\nI0915 11:13:42.139256 2093 log.go:181] (0xc00003a420) (0xc0003cbb80) Stream added, broadcasting: 3\nI0915 11:13:42.140078 2093 log.go:181] (0xc00003a420) Reply 
frame received for 3\nI0915 11:13:42.140116 2093 log.go:181] (0xc00003a420) (0xc000526140) Create stream\nI0915 11:13:42.140126 2093 log.go:181] (0xc00003a420) (0xc000526140) Stream added, broadcasting: 5\nI0915 11:13:42.140946 2093 log.go:181] (0xc00003a420) Reply frame received for 5\nI0915 11:13:42.226230 2093 log.go:181] (0xc00003a420) Data frame received for 5\nI0915 11:13:42.226264 2093 log.go:181] (0xc000526140) (5) Data frame handling\nI0915 11:13:42.226294 2093 log.go:181] (0xc000526140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0915 11:13:42.250882 2093 log.go:181] (0xc00003a420) Data frame received for 3\nI0915 11:13:42.250926 2093 log.go:181] (0xc0003cbb80) (3) Data frame handling\nI0915 11:13:42.250965 2093 log.go:181] (0xc0003cbb80) (3) Data frame sent\nI0915 11:13:42.251187 2093 log.go:181] (0xc00003a420) Data frame received for 3\nI0915 11:13:42.251226 2093 log.go:181] (0xc0003cbb80) (3) Data frame handling\nI0915 11:13:42.251260 2093 log.go:181] (0xc00003a420) Data frame received for 5\nI0915 11:13:42.251286 2093 log.go:181] (0xc000526140) (5) Data frame handling\nI0915 11:13:42.253172 2093 log.go:181] (0xc00003a420) Data frame received for 1\nI0915 11:13:42.253214 2093 log.go:181] (0xc0003cb4a0) (1) Data frame handling\nI0915 11:13:42.253235 2093 log.go:181] (0xc0003cb4a0) (1) Data frame sent\nI0915 11:13:42.253257 2093 log.go:181] (0xc00003a420) (0xc0003cb4a0) Stream removed, broadcasting: 1\nI0915 11:13:42.253335 2093 log.go:181] (0xc00003a420) Go away received\nI0915 11:13:42.253863 2093 log.go:181] (0xc00003a420) (0xc0003cb4a0) Stream removed, broadcasting: 1\nI0915 11:13:42.253903 2093 log.go:181] (0xc00003a420) (0xc0003cbb80) Stream removed, broadcasting: 3\nI0915 11:13:42.253919 2093 log.go:181] (0xc00003a420) (0xc000526140) Stream removed, broadcasting: 5\n" Sep 15 11:13:42.259: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Sep 15 11:13:42.259: INFO: stdout of mv -v 
/usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Sep 15 11:13:52.292: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Sep 15 11:14:02.359: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1828 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 15 11:14:02.602: INFO: stderr: "I0915 11:14:02.496474 2112 log.go:181] (0xc000e03290) (0xc0006fa640) Create stream\nI0915 11:14:02.496521 2112 log.go:181] (0xc000e03290) (0xc0006fa640) Stream added, broadcasting: 1\nI0915 11:14:02.501717 2112 log.go:181] (0xc000e03290) Reply frame received for 1\nI0915 11:14:02.501753 2112 log.go:181] (0xc000e03290) (0xc00013e000) Create stream\nI0915 11:14:02.501762 2112 log.go:181] (0xc000e03290) (0xc00013e000) Stream added, broadcasting: 3\nI0915 11:14:02.502684 2112 log.go:181] (0xc000e03290) Reply frame received for 3\nI0915 11:14:02.502718 2112 log.go:181] (0xc000e03290) (0xc0006fa000) Create stream\nI0915 11:14:02.502729 2112 log.go:181] (0xc000e03290) (0xc0006fa000) Stream added, broadcasting: 5\nI0915 11:14:02.503625 2112 log.go:181] (0xc000e03290) Reply frame received for 5\nI0915 11:14:02.591921 2112 log.go:181] (0xc000e03290) Data frame received for 5\nI0915 11:14:02.591961 2112 log.go:181] (0xc0006fa000) (5) Data frame handling\nI0915 11:14:02.591976 2112 log.go:181] (0xc0006fa000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0915 11:14:02.591997 2112 log.go:181] (0xc000e03290) Data frame received for 3\nI0915 11:14:02.592008 2112 log.go:181] (0xc00013e000) (3) Data frame handling\nI0915 11:14:02.592020 2112 log.go:181] (0xc00013e000) (3) Data frame sent\nI0915 
11:14:02.592030 2112 log.go:181] (0xc000e03290) Data frame received for 3\nI0915 11:14:02.592040 2112 log.go:181] (0xc00013e000) (3) Data frame handling\nI0915 11:14:02.596227 2112 log.go:181] (0xc000e03290) Data frame received for 5\nI0915 11:14:02.596249 2112 log.go:181] (0xc0006fa000) (5) Data frame handling\nI0915 11:14:02.598470 2112 log.go:181] (0xc000e03290) Data frame received for 1\nI0915 11:14:02.598513 2112 log.go:181] (0xc0006fa640) (1) Data frame handling\nI0915 11:14:02.598552 2112 log.go:181] (0xc0006fa640) (1) Data frame sent\nI0915 11:14:02.598588 2112 log.go:181] (0xc000e03290) (0xc0006fa640) Stream removed, broadcasting: 1\nI0915 11:14:02.598625 2112 log.go:181] (0xc000e03290) Go away received\nI0915 11:14:02.599073 2112 log.go:181] (0xc000e03290) (0xc0006fa640) Stream removed, broadcasting: 1\nI0915 11:14:02.599103 2112 log.go:181] (0xc000e03290) (0xc00013e000) Stream removed, broadcasting: 3\nI0915 11:14:02.599116 2112 log.go:181] (0xc000e03290) (0xc0006fa000) Stream removed, broadcasting: 5\n" Sep 15 11:14:02.603: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Sep 15 11:14:02.603: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Sep 15 11:14:12.622: INFO: Waiting for StatefulSet statefulset-1828/ss2 to complete update Sep 15 11:14:12.622: INFO: Waiting for Pod statefulset-1828/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Sep 15 11:14:12.622: INFO: Waiting for Pod statefulset-1828/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Sep 15 11:14:12.622: INFO: Waiting for Pod statefulset-1828/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Sep 15 11:14:22.680: INFO: Waiting for StatefulSet statefulset-1828/ss2 to complete update Sep 15 11:14:22.681: INFO: Waiting for Pod statefulset-1828/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Sep 15 
11:14:22.681: INFO: Waiting for Pod statefulset-1828/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Sep 15 11:14:32.629: INFO: Waiting for StatefulSet statefulset-1828/ss2 to complete update Sep 15 11:14:32.629: INFO: Waiting for Pod statefulset-1828/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Sep 15 11:14:42.631: INFO: Waiting for StatefulSet statefulset-1828/ss2 to complete update Sep 15 11:14:42.631: INFO: Waiting for Pod statefulset-1828/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Rolling back to a previous revision Sep 15 11:14:52.631: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1828 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Sep 15 11:14:52.887: INFO: stderr: "I0915 11:14:52.763460 2130 log.go:181] (0xc0000fe000) (0xc000c701e0) Create stream\nI0915 11:14:52.763545 2130 log.go:181] (0xc0000fe000) (0xc000c701e0) Stream added, broadcasting: 1\nI0915 11:14:52.765795 2130 log.go:181] (0xc0000fe000) Reply frame received for 1\nI0915 11:14:52.765864 2130 log.go:181] (0xc0000fe000) (0xc000c70280) Create stream\nI0915 11:14:52.765889 2130 log.go:181] (0xc0000fe000) (0xc000c70280) Stream added, broadcasting: 3\nI0915 11:14:52.766863 2130 log.go:181] (0xc0000fe000) Reply frame received for 3\nI0915 11:14:52.766884 2130 log.go:181] (0xc0000fe000) (0xc0006f4000) Create stream\nI0915 11:14:52.766891 2130 log.go:181] (0xc0000fe000) (0xc0006f4000) Stream added, broadcasting: 5\nI0915 11:14:52.767822 2130 log.go:181] (0xc0000fe000) Reply frame received for 5\nI0915 11:14:52.847750 2130 log.go:181] (0xc0000fe000) Data frame received for 5\nI0915 11:14:52.847773 2130 log.go:181] (0xc0006f4000) (5) Data frame handling\nI0915 11:14:52.847786 2130 log.go:181] (0xc0006f4000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0915 11:14:52.880399 2130 
log.go:181] (0xc0000fe000) Data frame received for 3\nI0915 11:14:52.880434 2130 log.go:181] (0xc000c70280) (3) Data frame handling\nI0915 11:14:52.880482 2130 log.go:181] (0xc000c70280) (3) Data frame sent\nI0915 11:14:52.880572 2130 log.go:181] (0xc0000fe000) Data frame received for 5\nI0915 11:14:52.880593 2130 log.go:181] (0xc0006f4000) (5) Data frame handling\nI0915 11:14:52.880624 2130 log.go:181] (0xc0000fe000) Data frame received for 3\nI0915 11:14:52.880629 2130 log.go:181] (0xc000c70280) (3) Data frame handling\nI0915 11:14:52.882699 2130 log.go:181] (0xc0000fe000) Data frame received for 1\nI0915 11:14:52.882809 2130 log.go:181] (0xc000c701e0) (1) Data frame handling\nI0915 11:14:52.882834 2130 log.go:181] (0xc000c701e0) (1) Data frame sent\nI0915 11:14:52.882856 2130 log.go:181] (0xc0000fe000) (0xc000c701e0) Stream removed, broadcasting: 1\nI0915 11:14:52.882878 2130 log.go:181] (0xc0000fe000) Go away received\nI0915 11:14:52.883438 2130 log.go:181] (0xc0000fe000) (0xc000c701e0) Stream removed, broadcasting: 1\nI0915 11:14:52.883469 2130 log.go:181] (0xc0000fe000) (0xc000c70280) Stream removed, broadcasting: 3\nI0915 11:14:52.883486 2130 log.go:181] (0xc0000fe000) (0xc0006f4000) Stream removed, broadcasting: 5\n" Sep 15 11:14:52.887: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Sep 15 11:14:52.887: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Sep 15 11:15:02.921: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Sep 15 11:15:12.975: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1828 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 15 11:15:13.220: INFO: stderr: "I0915 11:15:13.118511 2148 log.go:181] (0xc00003a0b0) (0xc000922000) Create stream\nI0915 11:15:13.118576 
2148 log.go:181] (0xc00003a0b0) (0xc000922000) Stream added, broadcasting: 1\nI0915 11:15:13.122929 2148 log.go:181] (0xc00003a0b0) Reply frame received for 1\nI0915 11:15:13.123005 2148 log.go:181] (0xc00003a0b0) (0xc000c3c1e0) Create stream\nI0915 11:15:13.123027 2148 log.go:181] (0xc00003a0b0) (0xc000c3c1e0) Stream added, broadcasting: 3\nI0915 11:15:13.124023 2148 log.go:181] (0xc00003a0b0) Reply frame received for 3\nI0915 11:15:13.124067 2148 log.go:181] (0xc00003a0b0) (0xc0008181e0) Create stream\nI0915 11:15:13.124086 2148 log.go:181] (0xc00003a0b0) (0xc0008181e0) Stream added, broadcasting: 5\nI0915 11:15:13.125247 2148 log.go:181] (0xc00003a0b0) Reply frame received for 5\nI0915 11:15:13.213558 2148 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0915 11:15:13.213612 2148 log.go:181] (0xc000c3c1e0) (3) Data frame handling\nI0915 11:15:13.213633 2148 log.go:181] (0xc000c3c1e0) (3) Data frame sent\nI0915 11:15:13.213653 2148 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0915 11:15:13.213667 2148 log.go:181] (0xc000c3c1e0) (3) Data frame handling\nI0915 11:15:13.213738 2148 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0915 11:15:13.213760 2148 log.go:181] (0xc0008181e0) (5) Data frame handling\nI0915 11:15:13.213794 2148 log.go:181] (0xc0008181e0) (5) Data frame sent\nI0915 11:15:13.213824 2148 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0915 11:15:13.213843 2148 log.go:181] (0xc0008181e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0915 11:15:13.215117 2148 log.go:181] (0xc00003a0b0) Data frame received for 1\nI0915 11:15:13.215152 2148 log.go:181] (0xc000922000) (1) Data frame handling\nI0915 11:15:13.215188 2148 log.go:181] (0xc000922000) (1) Data frame sent\nI0915 11:15:13.215220 2148 log.go:181] (0xc00003a0b0) (0xc000922000) Stream removed, broadcasting: 1\nI0915 11:15:13.215468 2148 log.go:181] (0xc00003a0b0) Go away received\nI0915 11:15:13.216000 2148 log.go:181] (0xc00003a0b0) 
(0xc000922000) Stream removed, broadcasting: 1\nI0915 11:15:13.216031 2148 log.go:181] (0xc00003a0b0) (0xc000c3c1e0) Stream removed, broadcasting: 3\nI0915 11:15:13.216051 2148 log.go:181] (0xc00003a0b0) (0xc0008181e0) Stream removed, broadcasting: 5\n" Sep 15 11:15:13.220: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Sep 15 11:15:13.220: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Sep 15 11:15:33.241: INFO: Deleting all statefulset in ns statefulset-1828 Sep 15 11:15:33.243: INFO: Scaling statefulset ss2 to 0 Sep 15 11:15:53.263: INFO: Waiting for statefulset status.replicas updated to 0 Sep 15 11:15:53.266: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:15:53.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1828" for this suite. 
• [SLOW TEST:151.507 seconds] [sig-apps] StatefulSet /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should perform rolling updates and roll backs of template modifications [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":303,"completed":160,"skipped":2728,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicaSet /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:15:53.290: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 15 11:15:53.391: INFO: Creating ReplicaSet my-hostname-basic-bae678ad-2d96-474f-90c0-991383dc4aeb Sep 15 11:15:53.435: INFO: Pod 
name my-hostname-basic-bae678ad-2d96-474f-90c0-991383dc4aeb: Found 0 pods out of 1 Sep 15 11:15:58.456: INFO: Pod name my-hostname-basic-bae678ad-2d96-474f-90c0-991383dc4aeb: Found 1 pods out of 1 Sep 15 11:15:58.456: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-bae678ad-2d96-474f-90c0-991383dc4aeb" is running Sep 15 11:15:58.507: INFO: Pod "my-hostname-basic-bae678ad-2d96-474f-90c0-991383dc4aeb-k54kb" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-15 11:15:53 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-15 11:15:56 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-15 11:15:56 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-15 11:15:53 +0000 UTC Reason: Message:}]) Sep 15 11:15:58.508: INFO: Trying to dial the pod Sep 15 11:16:03.521: INFO: Controller my-hostname-basic-bae678ad-2d96-474f-90c0-991383dc4aeb: Got expected result from replica 1 [my-hostname-basic-bae678ad-2d96-474f-90c0-991383dc4aeb-k54kb]: "my-hostname-basic-bae678ad-2d96-474f-90c0-991383dc4aeb-k54kb", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:16:03.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-2029" for this suite. 
• [SLOW TEST:10.239 seconds] [sig-apps] ReplicaSet /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":303,"completed":161,"skipped":2749,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:16:03.530: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check if kubectl diff finds a difference for Deployments [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create deployment with httpd image Sep 15 11:16:03.585: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config create -f -' Sep 15 11:16:03.915: INFO: stderr: "" Sep 15 
11:16:03.915: INFO: stdout: "deployment.apps/httpd-deployment created\n" STEP: verify diff finds difference between live and declared image Sep 15 11:16:03.915: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config diff -f -' Sep 15 11:16:04.376: INFO: rc: 1 Sep 15 11:16:04.376: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config delete -f -' Sep 15 11:16:04.472: INFO: stderr: "" Sep 15 11:16:04.472: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:16:04.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4270" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":303,"completed":162,"skipped":2763,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:16:04.519: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should 
rollback without unnecessary restarts [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 15 11:16:04.768: INFO: Create a RollingUpdate DaemonSet Sep 15 11:16:04.772: INFO: Check that daemon pods launch on every node of the cluster Sep 15 11:16:04.787: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 15 11:16:04.806: INFO: Number of nodes with available pods: 0 Sep 15 11:16:04.806: INFO: Node kali-worker is running more than one daemon pod Sep 15 11:16:05.812: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 15 11:16:05.816: INFO: Number of nodes with available pods: 0 Sep 15 11:16:05.816: INFO: Node kali-worker is running more than one daemon pod Sep 15 11:16:06.902: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 15 11:16:06.948: INFO: Number of nodes with available pods: 0 Sep 15 11:16:06.948: INFO: Node kali-worker is running more than one daemon pod Sep 15 11:16:07.812: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 15 11:16:07.815: INFO: Number of nodes with available pods: 0 Sep 15 11:16:07.815: INFO: Node kali-worker is running more than one daemon pod Sep 15 11:16:08.810: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 15 11:16:08.863: INFO: Number of nodes with available pods: 2 Sep 15 11:16:08.863: INFO: Number of 
running nodes: 2, number of available pods: 2 Sep 15 11:16:08.863: INFO: Update the DaemonSet to trigger a rollout Sep 15 11:16:08.871: INFO: Updating DaemonSet daemon-set Sep 15 11:16:23.933: INFO: Roll back the DaemonSet before rollout is complete Sep 15 11:16:23.941: INFO: Updating DaemonSet daemon-set Sep 15 11:16:23.941: INFO: Make sure DaemonSet rollback is complete Sep 15 11:16:23.948: INFO: Wrong image for pod: daemon-set-q25r8. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Sep 15 11:16:23.948: INFO: Pod daemon-set-q25r8 is not available Sep 15 11:16:23.968: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 15 11:16:24.974: INFO: Wrong image for pod: daemon-set-q25r8. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Sep 15 11:16:24.974: INFO: Pod daemon-set-q25r8 is not available Sep 15 11:16:24.977: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 15 11:16:26.007: INFO: Pod daemon-set-zwlwb is not available Sep 15 11:16:26.011: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8020, will wait for the garbage collector to delete the pods Sep 15 11:16:26.078: INFO: Deleting DaemonSet.extensions daemon-set took: 7.009008ms Sep 15 11:16:26.578: INFO: Terminating DaemonSet.extensions daemon-set pods took: 500.282648ms Sep 15 11:16:33.282: INFO: Number 
of nodes with available pods: 0 Sep 15 11:16:33.282: INFO: Number of running nodes: 0, number of available pods: 0 Sep 15 11:16:33.285: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8020/daemonsets","resourceVersion":"447056"},"items":null} Sep 15 11:16:33.287: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8020/pods","resourceVersion":"447056"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:16:33.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8020" for this suite. • [SLOW TEST:28.785 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":303,"completed":163,"skipped":2773,"failed":0} SSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:16:33.304: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a 
namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6085.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-6085.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6085.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-6085.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Sep 15 11:16:41.425: INFO: DNS probes using dns-test-14880a7f-0c06-45ef-a031-f300d22ff0bb succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6085.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-6085.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6085.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-6085.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Sep 15 11:16:49.613: INFO: File jessie_udp@dns-test-service-3.dns-6085.svc.cluster.local from pod dns-6085/dns-test-d0cc60c4-fba6-4ef8-b6c1-94e725d0b1ce contains 'foo.example.com. ' instead of 'bar.example.com.' 
Sep 15 11:16:49.613: INFO: Lookups using dns-6085/dns-test-d0cc60c4-fba6-4ef8-b6c1-94e725d0b1ce failed for: [jessie_udp@dns-test-service-3.dns-6085.svc.cluster.local] Sep 15 11:16:54.622: INFO: DNS probes using dns-test-d0cc60c4-fba6-4ef8-b6c1-94e725d0b1ce succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6085.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-6085.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6085.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-6085.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Sep 15 11:17:01.212: INFO: DNS probes using dns-test-234b1479-3a0a-4f5f-a77e-ea08a81b5c83 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:17:01.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6085" for this suite. 
• [SLOW TEST:27.997 seconds] [sig-network] DNS /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":303,"completed":164,"skipped":2776,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:17:01.303: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: waiting for pod running STEP: creating a file in subpath Sep 15 11:17:05.795: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-3499 PodName:var-expansion-61268b71-90dc-4ba9-be85-774b757a1b4c ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} 
Sep 15 11:17:05.795: INFO: >>> kubeConfig: /root/.kube/config I0915 11:17:05.831828 7 log.go:181] (0xc0073fadc0) (0xc000ddbd60) Create stream I0915 11:17:05.831862 7 log.go:181] (0xc0073fadc0) (0xc000ddbd60) Stream added, broadcasting: 1 I0915 11:17:05.834471 7 log.go:181] (0xc0073fadc0) Reply frame received for 1 I0915 11:17:05.834520 7 log.go:181] (0xc0073fadc0) (0xc000ddbe00) Create stream I0915 11:17:05.834533 7 log.go:181] (0xc0073fadc0) (0xc000ddbe00) Stream added, broadcasting: 3 I0915 11:17:05.835572 7 log.go:181] (0xc0073fadc0) Reply frame received for 3 I0915 11:17:05.835631 7 log.go:181] (0xc0073fadc0) (0xc000ddbea0) Create stream I0915 11:17:05.835649 7 log.go:181] (0xc0073fadc0) (0xc000ddbea0) Stream added, broadcasting: 5 I0915 11:17:05.836640 7 log.go:181] (0xc0073fadc0) Reply frame received for 5 I0915 11:17:05.927523 7 log.go:181] (0xc0073fadc0) Data frame received for 3 I0915 11:17:05.927560 7 log.go:181] (0xc000ddbe00) (3) Data frame handling I0915 11:17:05.927582 7 log.go:181] (0xc0073fadc0) Data frame received for 5 I0915 11:17:05.927595 7 log.go:181] (0xc000ddbea0) (5) Data frame handling I0915 11:17:05.929398 7 log.go:181] (0xc0073fadc0) Data frame received for 1 I0915 11:17:05.929428 7 log.go:181] (0xc000ddbd60) (1) Data frame handling I0915 11:17:05.929458 7 log.go:181] (0xc000ddbd60) (1) Data frame sent I0915 11:17:05.929476 7 log.go:181] (0xc0073fadc0) (0xc000ddbd60) Stream removed, broadcasting: 1 I0915 11:17:05.929564 7 log.go:181] (0xc0073fadc0) Go away received I0915 11:17:05.929611 7 log.go:181] (0xc0073fadc0) (0xc000ddbd60) Stream removed, broadcasting: 1 I0915 11:17:05.929636 7 log.go:181] (0xc0073fadc0) (0xc000ddbe00) Stream removed, broadcasting: 3 I0915 11:17:05.929660 7 log.go:181] (0xc0073fadc0) (0xc000ddbea0) Stream removed, broadcasting: 5 STEP: test for file in mounted path Sep 15 11:17:05.941: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-3499 
PodName:var-expansion-61268b71-90dc-4ba9-be85-774b757a1b4c ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 15 11:17:05.942: INFO: >>> kubeConfig: /root/.kube/config I0915 11:17:05.979715 7 log.go:181] (0xc002f9a580) (0xc003dedf40) Create stream I0915 11:17:05.979756 7 log.go:181] (0xc002f9a580) (0xc003dedf40) Stream added, broadcasting: 1 I0915 11:17:05.982130 7 log.go:181] (0xc002f9a580) Reply frame received for 1 I0915 11:17:05.982182 7 log.go:181] (0xc002f9a580) (0xc003664960) Create stream I0915 11:17:05.982195 7 log.go:181] (0xc002f9a580) (0xc003664960) Stream added, broadcasting: 3 I0915 11:17:05.983106 7 log.go:181] (0xc002f9a580) Reply frame received for 3 I0915 11:17:05.983155 7 log.go:181] (0xc002f9a580) (0xc003f97ea0) Create stream I0915 11:17:05.983186 7 log.go:181] (0xc002f9a580) (0xc003f97ea0) Stream added, broadcasting: 5 I0915 11:17:05.984374 7 log.go:181] (0xc002f9a580) Reply frame received for 5 I0915 11:17:06.038209 7 log.go:181] (0xc002f9a580) Data frame received for 3 I0915 11:17:06.038247 7 log.go:181] (0xc003664960) (3) Data frame handling I0915 11:17:06.038287 7 log.go:181] (0xc002f9a580) Data frame received for 5 I0915 11:17:06.038330 7 log.go:181] (0xc003f97ea0) (5) Data frame handling I0915 11:17:06.039723 7 log.go:181] (0xc002f9a580) Data frame received for 1 I0915 11:17:06.039749 7 log.go:181] (0xc003dedf40) (1) Data frame handling I0915 11:17:06.039766 7 log.go:181] (0xc003dedf40) (1) Data frame sent I0915 11:17:06.039779 7 log.go:181] (0xc002f9a580) (0xc003dedf40) Stream removed, broadcasting: 1 I0915 11:17:06.039793 7 log.go:181] (0xc002f9a580) Go away received I0915 11:17:06.039861 7 log.go:181] (0xc002f9a580) (0xc003dedf40) Stream removed, broadcasting: 1 I0915 11:17:06.039878 7 log.go:181] (0xc002f9a580) (0xc003664960) Stream removed, broadcasting: 3 I0915 11:17:06.039885 7 log.go:181] (0xc002f9a580) (0xc003f97ea0) Stream removed, broadcasting: 5 STEP: updating the annotation 
value Sep 15 11:17:06.555: INFO: Successfully updated pod "var-expansion-61268b71-90dc-4ba9-be85-774b757a1b4c" STEP: waiting for annotated pod running STEP: deleting the pod gracefully Sep 15 11:17:06.573: INFO: Deleting pod "var-expansion-61268b71-90dc-4ba9-be85-774b757a1b4c" in namespace "var-expansion-3499" Sep 15 11:17:06.606: INFO: Wait up to 5m0s for pod "var-expansion-61268b71-90dc-4ba9-be85-774b757a1b4c" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:17:42.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3499" for this suite. • [SLOW TEST:41.350 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]","total":303,"completed":165,"skipped":2860,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:17:42.654: INFO: >>> kubeConfig: /root/.kube/config STEP: 
Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Sep 15 11:17:47.313: INFO: Successfully updated pod "pod-update-activedeadlineseconds-e918ffd2-b650-4231-a364-e89e09f2035c" Sep 15 11:17:47.313: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-e918ffd2-b650-4231-a364-e89e09f2035c" in namespace "pods-3716" to be "terminated due to deadline exceeded" Sep 15 11:17:47.362: INFO: Pod "pod-update-activedeadlineseconds-e918ffd2-b650-4231-a364-e89e09f2035c": Phase="Running", Reason="", readiness=true. Elapsed: 48.915556ms Sep 15 11:17:49.367: INFO: Pod "pod-update-activedeadlineseconds-e918ffd2-b650-4231-a364-e89e09f2035c": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.053898662s Sep 15 11:17:49.367: INFO: Pod "pod-update-activedeadlineseconds-e918ffd2-b650-4231-a364-e89e09f2035c" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:17:49.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3716" for this suite. 
• [SLOW TEST:6.724 seconds] [k8s.io] Pods /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":303,"completed":166,"skipped":2876,"failed":0} SS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:17:49.378: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-map-d1b0b462-2b1e-46db-8fee-1c093bb0fb8b STEP: Creating a pod to test consume secrets Sep 15 11:17:49.461: INFO: Waiting up to 5m0s for pod "pod-secrets-df87a615-10de-42c6-9b71-88b372adf42b" in namespace "secrets-8848" to be "Succeeded or Failed" Sep 15 11:17:49.480: INFO: Pod "pod-secrets-df87a615-10de-42c6-9b71-88b372adf42b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 18.806956ms Sep 15 11:17:51.536: INFO: Pod "pod-secrets-df87a615-10de-42c6-9b71-88b372adf42b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074036352s Sep 15 11:17:53.540: INFO: Pod "pod-secrets-df87a615-10de-42c6-9b71-88b372adf42b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.078074163s STEP: Saw pod success Sep 15 11:17:53.540: INFO: Pod "pod-secrets-df87a615-10de-42c6-9b71-88b372adf42b" satisfied condition "Succeeded or Failed" Sep 15 11:17:53.543: INFO: Trying to get logs from node kali-worker pod pod-secrets-df87a615-10de-42c6-9b71-88b372adf42b container secret-volume-test: STEP: delete the pod Sep 15 11:17:53.617: INFO: Waiting for pod pod-secrets-df87a615-10de-42c6-9b71-88b372adf42b to disappear Sep 15 11:17:53.633: INFO: Pod pod-secrets-df87a615-10de-42c6-9b71-88b372adf42b no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:17:53.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8848" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":303,"completed":167,"skipped":2878,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:17:53.640: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 15 11:17:53.853: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"22425e60-29ff-493f-bed7-7e2053fd4275", Controller:(*bool)(0xc00392fda2), BlockOwnerDeletion:(*bool)(0xc00392fda3)}} Sep 15 11:17:53.861: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"ffc8d920-8806-4294-8d12-185a46c0567d", Controller:(*bool)(0xc0055aa7ca), BlockOwnerDeletion:(*bool)(0xc0055aa7cb)}} Sep 15 11:17:53.881: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"18ce5c74-2229-48ff-85aa-2fc0d6cf9837", Controller:(*bool)(0xc0046c7512), BlockOwnerDeletion:(*bool)(0xc0046c7513)}} [AfterEach] [sig-api-machinery] Garbage collector 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:17:58.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1168" for this suite. • [SLOW TEST:5.426 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":303,"completed":168,"skipped":2883,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:17:59.067: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] 
[Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 15 11:17:59.223: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-39b2ca3a-06af-40f7-9a52-db4bae463200" in namespace "security-context-test-1209" to be "Succeeded or Failed" Sep 15 11:17:59.226: INFO: Pod "alpine-nnp-false-39b2ca3a-06af-40f7-9a52-db4bae463200": Phase="Pending", Reason="", readiness=false. Elapsed: 3.47982ms Sep 15 11:18:01.283: INFO: Pod "alpine-nnp-false-39b2ca3a-06af-40f7-9a52-db4bae463200": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060368333s Sep 15 11:18:03.358: INFO: Pod "alpine-nnp-false-39b2ca3a-06af-40f7-9a52-db4bae463200": Phase="Pending", Reason="", readiness=false. Elapsed: 4.13501709s Sep 15 11:18:05.365: INFO: Pod "alpine-nnp-false-39b2ca3a-06af-40f7-9a52-db4bae463200": Phase="Pending", Reason="", readiness=false. Elapsed: 6.142225362s Sep 15 11:18:07.370: INFO: Pod "alpine-nnp-false-39b2ca3a-06af-40f7-9a52-db4bae463200": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.147131614s Sep 15 11:18:07.370: INFO: Pod "alpine-nnp-false-39b2ca3a-06af-40f7-9a52-db4bae463200" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:18:07.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-1209" for this suite. 
• [SLOW TEST:8.318 seconds] [k8s.io] Security Context /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when creating containers with AllowPrivilegeEscalation /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:291 should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":169,"skipped":2914,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:18:07.385: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-8728 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-8728 STEP: creating replication controller externalsvc in namespace services-8728 I0915 11:18:07.598560 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-8728, replica count: 2 I0915 11:18:10.648917 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0915 11:18:13.649179 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Sep 15 11:18:13.688: INFO: Creating new exec pod Sep 15 11:18:17.717: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=services-8728 execpod4vlj7 -- /bin/sh -x -c nslookup clusterip-service.services-8728.svc.cluster.local' Sep 15 11:18:20.967: INFO: stderr: "I0915 11:18:20.865572 2222 log.go:181] (0xc0004da000) (0xc0004c6000) Create stream\nI0915 11:18:20.865655 2222 log.go:181] (0xc0004da000) (0xc0004c6000) Stream added, broadcasting: 1\nI0915 11:18:20.868357 2222 log.go:181] (0xc0004da000) Reply frame received for 1\nI0915 11:18:20.868390 2222 log.go:181] (0xc0004da000) (0xc000c14140) Create stream\nI0915 11:18:20.868402 2222 log.go:181] (0xc0004da000) (0xc000c14140) Stream added, broadcasting: 3\nI0915 11:18:20.869678 2222 log.go:181] (0xc0004da000) Reply frame received for 3\nI0915 11:18:20.869716 2222 log.go:181] (0xc0004da000) (0xc0004c60a0) Create stream\nI0915 
11:18:20.869728 2222 log.go:181] (0xc0004da000) (0xc0004c60a0) Stream added, broadcasting: 5\nI0915 11:18:20.870659 2222 log.go:181] (0xc0004da000) Reply frame received for 5\nI0915 11:18:20.946149 2222 log.go:181] (0xc0004da000) Data frame received for 5\nI0915 11:18:20.946180 2222 log.go:181] (0xc0004c60a0) (5) Data frame handling\nI0915 11:18:20.946200 2222 log.go:181] (0xc0004c60a0) (5) Data frame sent\n+ nslookup clusterip-service.services-8728.svc.cluster.local\nI0915 11:18:20.958439 2222 log.go:181] (0xc0004da000) Data frame received for 3\nI0915 11:18:20.958466 2222 log.go:181] (0xc000c14140) (3) Data frame handling\nI0915 11:18:20.958486 2222 log.go:181] (0xc000c14140) (3) Data frame sent\nI0915 11:18:20.959746 2222 log.go:181] (0xc0004da000) Data frame received for 3\nI0915 11:18:20.959864 2222 log.go:181] (0xc000c14140) (3) Data frame handling\nI0915 11:18:20.959900 2222 log.go:181] (0xc000c14140) (3) Data frame sent\nI0915 11:18:20.960385 2222 log.go:181] (0xc0004da000) Data frame received for 3\nI0915 11:18:20.960416 2222 log.go:181] (0xc000c14140) (3) Data frame handling\nI0915 11:18:20.960574 2222 log.go:181] (0xc0004da000) Data frame received for 5\nI0915 11:18:20.960594 2222 log.go:181] (0xc0004c60a0) (5) Data frame handling\nI0915 11:18:20.962377 2222 log.go:181] (0xc0004da000) Data frame received for 1\nI0915 11:18:20.962413 2222 log.go:181] (0xc0004c6000) (1) Data frame handling\nI0915 11:18:20.962436 2222 log.go:181] (0xc0004c6000) (1) Data frame sent\nI0915 11:18:20.962469 2222 log.go:181] (0xc0004da000) (0xc0004c6000) Stream removed, broadcasting: 1\nI0915 11:18:20.962497 2222 log.go:181] (0xc0004da000) Go away received\nI0915 11:18:20.962946 2222 log.go:181] (0xc0004da000) (0xc0004c6000) Stream removed, broadcasting: 1\nI0915 11:18:20.962965 2222 log.go:181] (0xc0004da000) (0xc000c14140) Stream removed, broadcasting: 3\nI0915 11:18:20.962975 2222 log.go:181] (0xc0004da000) (0xc0004c60a0) Stream removed, broadcasting: 5\n" Sep 15 
11:18:20.967: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-8728.svc.cluster.local\tcanonical name = externalsvc.services-8728.svc.cluster.local.\nName:\texternalsvc.services-8728.svc.cluster.local\nAddress: 10.105.131.201\n\n" STEP: deleting ReplicationController externalsvc in namespace services-8728, will wait for the garbage collector to delete the pods Sep 15 11:18:21.046: INFO: Deleting ReplicationController externalsvc took: 24.157595ms Sep 15 11:18:21.446: INFO: Terminating ReplicationController externalsvc pods took: 400.188436ms Sep 15 11:18:33.305: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:18:33.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8728" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:25.943 seconds] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":303,"completed":170,"skipped":2929,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:18:33.329: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting the auto-created API token Sep 15 11:18:33.941: INFO: created pod pod-service-account-defaultsa Sep 15 11:18:33.941: INFO: pod pod-service-account-defaultsa service account token volume mount: true Sep 15 11:18:33.996: INFO: created pod pod-service-account-mountsa Sep 15 11:18:33.996: INFO: pod pod-service-account-mountsa service account token volume mount: true Sep 15 11:18:34.008: INFO: created pod pod-service-account-nomountsa Sep 15 11:18:34.008: INFO: pod pod-service-account-nomountsa service account token volume mount: false Sep 15 11:18:34.033: INFO: created pod pod-service-account-defaultsa-mountspec Sep 15 11:18:34.034: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Sep 15 11:18:34.073: INFO: created pod pod-service-account-mountsa-mountspec Sep 15 11:18:34.073: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Sep 15 11:18:34.152: INFO: created pod pod-service-account-nomountsa-mountspec Sep 15 11:18:34.153: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Sep 15 11:18:34.176: INFO: created pod pod-service-account-defaultsa-nomountspec Sep 15 
11:18:34.176: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Sep 15 11:18:34.224: INFO: created pod pod-service-account-mountsa-nomountspec Sep 15 11:18:34.224: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Sep 15 11:18:34.243: INFO: created pod pod-service-account-nomountsa-nomountspec Sep 15 11:18:34.243: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:18:34.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-2884" for this suite. •{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":303,"completed":171,"skipped":2943,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:18:34.408: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on tmpfs 
Sep 15 11:18:34.601: INFO: Waiting up to 5m0s for pod "pod-a0a95f88-7ba7-49d9-87cb-4b2d9ccd378a" in namespace "emptydir-7860" to be "Succeeded or Failed" Sep 15 11:18:34.637: INFO: Pod "pod-a0a95f88-7ba7-49d9-87cb-4b2d9ccd378a": Phase="Pending", Reason="", readiness=false. Elapsed: 35.742701ms Sep 15 11:18:36.787: INFO: Pod "pod-a0a95f88-7ba7-49d9-87cb-4b2d9ccd378a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.18623448s Sep 15 11:18:39.109: INFO: Pod "pod-a0a95f88-7ba7-49d9-87cb-4b2d9ccd378a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.507949087s Sep 15 11:18:41.429: INFO: Pod "pod-a0a95f88-7ba7-49d9-87cb-4b2d9ccd378a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.828022932s Sep 15 11:18:44.842: INFO: Pod "pod-a0a95f88-7ba7-49d9-87cb-4b2d9ccd378a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.240800049s Sep 15 11:18:46.900: INFO: Pod "pod-a0a95f88-7ba7-49d9-87cb-4b2d9ccd378a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.299041013s STEP: Saw pod success Sep 15 11:18:46.900: INFO: Pod "pod-a0a95f88-7ba7-49d9-87cb-4b2d9ccd378a" satisfied condition "Succeeded or Failed" Sep 15 11:18:46.903: INFO: Trying to get logs from node kali-worker pod pod-a0a95f88-7ba7-49d9-87cb-4b2d9ccd378a container test-container: STEP: delete the pod Sep 15 11:18:47.357: INFO: Waiting for pod pod-a0a95f88-7ba7-49d9-87cb-4b2d9ccd378a to disappear Sep 15 11:18:47.511: INFO: Pod pod-a0a95f88-7ba7-49d9-87cb-4b2d9ccd378a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:18:47.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7860" for this suite. 
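The EmptyDir test above checks file mode 0666 on a tmpfs-backed volume. A sketch of an equivalent setup, assuming illustrative names (the real test uses a dedicated mounttest image that writes and stats the file):

```yaml
# Illustrative pod with a memory-backed (tmpfs) emptyDir; medium: Memory
# is Linux-only, which is why the test carries the [LinuxOnly] tag.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0666-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.32
    command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory
```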
• [SLOW TEST:13.118 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":172,"skipped":2956,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:18:47.526: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 15 11:18:47.654: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:18:53.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1177" for this suite. • [SLOW TEST:6.333 seconds] [k8s.io] Pods /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":303,"completed":173,"skipped":2968,"failed":0} [sig-api-machinery] Events should delete a collection of events [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Events /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:18:53.859: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a collection of events [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create set of events Sep 15 11:18:53.918: INFO: created test-event-1 Sep 15 11:18:53.936: INFO: created test-event-2 Sep 15 11:18:53.956: INFO: created test-event-3 STEP: get a list of Events with a label in the current 
namespace STEP: delete collection of events Sep 15 11:18:53.972: INFO: requesting DeleteCollection of events STEP: check that the list of events matches the requested quantity Sep 15 11:18:53.992: INFO: requesting list of events to confirm quantity [AfterEach] [sig-api-machinery] Events /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:18:53.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-5677" for this suite. •{"msg":"PASSED [sig-api-machinery] Events should delete a collection of events [Conformance]","total":303,"completed":174,"skipped":2968,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:18:54.003: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-3556 [It] should perform 
canary updates and phased rolling updates of template modifications [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a new StatefulSet Sep 15 11:18:54.127: INFO: Found 0 stateful pods, waiting for 3 Sep 15 11:19:04.134: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Sep 15 11:19:04.134: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Sep 15 11:19:04.134: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Sep 15 11:19:14.149: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Sep 15 11:19:14.150: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Sep 15 11:19:14.150: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Sep 15 11:19:14.206: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Sep 15 11:19:24.258: INFO: Updating stateful set ss2 Sep 15 11:19:24.364: INFO: Waiting for Pod statefulset-3556/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted Sep 15 11:19:35.136: INFO: Found 2 stateful pods, waiting for 3 Sep 15 11:19:45.141: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Sep 15 11:19:45.141: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Sep 15 11:19:45.141: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a 
phased rolling update Sep 15 11:19:45.167: INFO: Updating stateful set ss2 Sep 15 11:19:45.174: INFO: Waiting for Pod statefulset-3556/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Sep 15 11:19:55.198: INFO: Updating stateful set ss2 Sep 15 11:19:55.230: INFO: Waiting for StatefulSet statefulset-3556/ss2 to complete update Sep 15 11:19:55.230: INFO: Waiting for Pod statefulset-3556/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Sep 15 11:20:05.238: INFO: Waiting for StatefulSet statefulset-3556/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Sep 15 11:20:15.240: INFO: Deleting all statefulset in ns statefulset-3556 Sep 15 11:20:15.243: INFO: Scaling statefulset ss2 to 0 Sep 15 11:20:35.312: INFO: Waiting for statefulset status.replicas updated to 0 Sep 15 11:20:35.387: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:20:35.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3556" for this suite. 
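The canary and phased rolling updates above are driven by the StatefulSet `RollingUpdate` partition: pods with an ordinal greater than or equal to the partition receive the new revision, so raising the partition above `replicas` blocks the update, setting it to `replicas - 1` canaries one pod, and lowering it step by step phases the rollout. A sketch of the relevant spec, using the set name and images from this run (other fields are illustrative):

```yaml
# Illustrative StatefulSet spec fragment; with partition: 2 and 3 replicas,
# only ss2-2 is updated to the new image revision (the canary step above).
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  serviceName: test
  replicas: 3
  selector:
    matchLabels:
      app: ss2-demo
  template:
    metadata:
      labels:
        app: ss2-demo
    spec:
      containers:
      - name: webserver
        image: docker.io/library/httpd:2.4.39-alpine
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 2   # lower toward 0 to phase the rollout across ss2-1, ss2-0
```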
• [SLOW TEST:101.411 seconds] [sig-apps] StatefulSet /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":303,"completed":175,"skipped":2983,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:20:35.415: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name s-test-opt-del-28a57801-336c-4b2d-a729-97d23f90cb84 STEP: Creating secret with name 
s-test-opt-upd-d23f0017-3462-4bf8-b44a-eac24a000490 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-28a57801-336c-4b2d-a729-97d23f90cb84 STEP: Updating secret s-test-opt-upd-d23f0017-3462-4bf8-b44a-eac24a000490 STEP: Creating secret with name s-test-opt-create-de9be4ed-7386-4856-9c9d-54f8467de4db STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:22:00.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5452" for this suite. • [SLOW TEST:84.694 seconds] [sig-storage] Secrets /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":176,"skipped":3000,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:22:00.109: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename 
watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Sep 15 11:22:00.220: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7991 /api/v1/namespaces/watch-7991/configmaps/e2e-watch-test-watch-closed ed188e20-5228-47f6-985e-9019ef2b21ac 448915 0 2020-09-15 11:22:00 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-09-15 11:22:00 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Sep 15 11:22:00.220: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7991 /api/v1/namespaces/watch-7991/configmaps/e2e-watch-test-watch-closed ed188e20-5228-47f6-985e-9019ef2b21ac 448916 0 2020-09-15 11:22:00 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-09-15 11:22:00 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Sep 15 11:22:00.231: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7991 
/api/v1/namespaces/watch-7991/configmaps/e2e-watch-test-watch-closed ed188e20-5228-47f6-985e-9019ef2b21ac 448917 0 2020-09-15 11:22:00 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-09-15 11:22:00 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Sep 15 11:22:00.231: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7991 /api/v1/namespaces/watch-7991/configmaps/e2e-watch-test-watch-closed ed188e20-5228-47f6-985e-9019ef2b21ac 448918 0 2020-09-15 11:22:00 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-09-15 11:22:00 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:22:00.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7991" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":303,"completed":177,"skipped":3029,"failed":0}
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 15 11:22:00.239: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group but different versions [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation
Sep 15 11:22:00.287: INFO: >>> kubeConfig: /root/.kube/config
STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation
Sep 15 11:22:12.057: INFO: >>> kubeConfig: /root/.kube/config
Sep 15 11:22:15.034: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 15 11:22:26.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-6242" for this suite.
• [SLOW TEST:26.581 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group but different versions [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":303,"completed":178,"skipped":3029,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 15 11:22:26.820: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with best effort scope. [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a ResourceQuota with best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a best-effort pod
STEP: Ensuring resource quota with best effort scope captures the pod usage
STEP: Ensuring resource quota with not best effort ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a not best-effort pod
STEP: Ensuring resource quota with not best effort scope captures the pod usage
STEP: Ensuring resource quota with best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 15 11:22:43.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3654" for this suite.
• [SLOW TEST:16.186 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with best effort scope. [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":303,"completed":179,"skipped":3053,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 15 11:22:43.007: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 15 11:22:47.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-4450" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":180,"skipped":3079,"failed":0}
------------------------------
[sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 15 11:22:47.190: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256
[It] should create services for rc [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating Agnhost RC
Sep 15 11:22:47.271: INFO: namespace kubectl-6371
Sep 15 11:22:47.271: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6371'
Sep 15 11:22:47.635: INFO: stderr: ""
Sep 15 11:22:47.635: INFO: stdout: "replicationcontroller/agnhost-primary created\n"
STEP: Waiting for Agnhost primary to start.
Sep 15 11:22:48.640: INFO: Selector matched 1 pods for map[app:agnhost]
Sep 15 11:22:48.640: INFO: Found 0 / 1
Sep 15 11:22:49.647: INFO: Selector matched 1 pods for map[app:agnhost]
Sep 15 11:22:49.647: INFO: Found 0 / 1
Sep 15 11:22:50.639: INFO: Selector matched 1 pods for map[app:agnhost]
Sep 15 11:22:50.639: INFO: Found 1 / 1
Sep 15 11:22:50.639: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Sep 15 11:22:50.642: INFO: Selector matched 1 pods for map[app:agnhost]
Sep 15 11:22:50.642: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Sep 15 11:22:50.642: INFO: wait on agnhost-primary startup in kubectl-6371
Sep 15 11:22:50.642: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config logs agnhost-primary-rffp4 agnhost-primary --namespace=kubectl-6371'
Sep 15 11:22:50.758: INFO: stderr: ""
Sep 15 11:22:50.758: INFO: stdout: "Paused\n"
STEP: exposing RC
Sep 15 11:22:50.758: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-6371'
Sep 15 11:22:50.908: INFO: stderr: ""
Sep 15 11:22:50.908: INFO: stdout: "service/rm2 exposed\n"
Sep 15 11:22:50.948: INFO: Service rm2 in namespace kubectl-6371 found.
STEP: exposing service
Sep 15 11:22:52.956: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-6371'
Sep 15 11:22:53.093: INFO: stderr: ""
Sep 15 11:22:53.093: INFO: stdout: "service/rm3 exposed\n"
Sep 15 11:22:53.097: INFO: Service rm3 in namespace kubectl-6371 found.
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 15 11:22:55.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6371" for this suite.
• [SLOW TEST:7.927 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl expose
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1246
    should create services for rc [Conformance]
    /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":303,"completed":181,"skipped":3079,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 15 11:22:55.117: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod with failed condition
STEP: updating the pod
Sep 15 11:24:55.794: INFO: Successfully updated pod "var-expansion-7912efe9-d963-4070-9693-318179006f8c"
STEP: waiting for pod running
STEP: deleting the pod gracefully
Sep 15 11:24:59.819: INFO: Deleting pod "var-expansion-7912efe9-d963-4070-9693-318179006f8c" in namespace "var-expansion-5918"
Sep 15 11:24:59.824: INFO: Wait up to 5m0s for pod "var-expansion-7912efe9-d963-4070-9693-318179006f8c" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 15 11:25:33.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-5918" for this suite.
• [SLOW TEST:158.749 seconds]
[k8s.io] Variable Expansion
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]","total":303,"completed":182,"skipped":3107,"failed":0}
SS
------------------------------
[sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Job
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 15 11:25:33.867: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 15 11:25:49.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-7437" for this suite.
• [SLOW TEST:16.098 seconds]
[sig-apps] Job
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":303,"completed":183,"skipped":3109,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 15 11:25:49.966: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Sep 15 11:27:50.066: INFO: Deleting pod "var-expansion-3d8f1a31-0c0c-42e6-bb30-e05be159f9f4" in namespace "var-expansion-3217"
Sep 15 11:27:50.071: INFO: Wait up to 5m0s for pod "var-expansion-3d8f1a31-0c0c-42e6-bb30-e05be159f9f4" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 15 11:27:54.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-3217" for this suite.
• [SLOW TEST:124.145 seconds]
[k8s.io] Variable Expansion
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]","total":303,"completed":184,"skipped":3138,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 15 11:27:54.111: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-configmap-t27q
STEP: Creating a pod to test atomic-volume-subpath
Sep 15 11:27:54.210: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-t27q" in namespace "subpath-6347" to be "Succeeded or Failed"
Sep 15 11:27:54.230: INFO: Pod "pod-subpath-test-configmap-t27q": Phase="Pending", Reason="", readiness=false. Elapsed: 20.238322ms
Sep 15 11:27:56.234: INFO: Pod "pod-subpath-test-configmap-t27q": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024814998s
Sep 15 11:27:58.240: INFO: Pod "pod-subpath-test-configmap-t27q": Phase="Running", Reason="", readiness=true. Elapsed: 4.029934619s
Sep 15 11:28:00.244: INFO: Pod "pod-subpath-test-configmap-t27q": Phase="Running", Reason="", readiness=true. Elapsed: 6.034323604s
Sep 15 11:28:02.249: INFO: Pod "pod-subpath-test-configmap-t27q": Phase="Running", Reason="", readiness=true. Elapsed: 8.039191587s
Sep 15 11:28:04.253: INFO: Pod "pod-subpath-test-configmap-t27q": Phase="Running", Reason="", readiness=true. Elapsed: 10.043708601s
Sep 15 11:28:06.257: INFO: Pod "pod-subpath-test-configmap-t27q": Phase="Running", Reason="", readiness=true. Elapsed: 12.047454313s
Sep 15 11:28:08.262: INFO: Pod "pod-subpath-test-configmap-t27q": Phase="Running", Reason="", readiness=true. Elapsed: 14.052284852s
Sep 15 11:28:10.266: INFO: Pod "pod-subpath-test-configmap-t27q": Phase="Running", Reason="", readiness=true. Elapsed: 16.056546628s
Sep 15 11:28:12.271: INFO: Pod "pod-subpath-test-configmap-t27q": Phase="Running", Reason="", readiness=true. Elapsed: 18.06119326s
Sep 15 11:28:14.275: INFO: Pod "pod-subpath-test-configmap-t27q": Phase="Running", Reason="", readiness=true. Elapsed: 20.065412009s
Sep 15 11:28:16.278: INFO: Pod "pod-subpath-test-configmap-t27q": Phase="Running", Reason="", readiness=true. Elapsed: 22.068572962s
Sep 15 11:28:19.622: INFO: Pod "pod-subpath-test-configmap-t27q": Phase="Succeeded", Reason="", readiness=false. Elapsed: 25.412780054s
STEP: Saw pod success
Sep 15 11:28:19.622: INFO: Pod "pod-subpath-test-configmap-t27q" satisfied condition "Succeeded or Failed"
Sep 15 11:28:19.625: INFO: Trying to get logs from node kali-worker pod pod-subpath-test-configmap-t27q container test-container-subpath-configmap-t27q:
STEP: delete the pod
Sep 15 11:28:19.891: INFO: Waiting for pod pod-subpath-test-configmap-t27q to disappear
Sep 15 11:28:19.908: INFO: Pod pod-subpath-test-configmap-t27q no longer exists
STEP: Deleting pod pod-subpath-test-configmap-t27q
Sep 15 11:28:19.908: INFO: Deleting pod "pod-subpath-test-configmap-t27q" in namespace "subpath-6347"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 15 11:28:19.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6347" for this suite.
• [SLOW TEST:25.809 seconds]
[sig-storage] Subpath
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":303,"completed":185,"skipped":3146,"failed":0}
[sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 15 11:28:19.920: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Sep 15 11:28:20.026: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d68edb9d-e9fd-4983-8e53-c3f8ba0a4b59" in namespace "downward-api-3668" to be "Succeeded or Failed"
Sep 15 11:28:20.040: INFO: Pod "downwardapi-volume-d68edb9d-e9fd-4983-8e53-c3f8ba0a4b59": Phase="Pending", Reason="", readiness=false. Elapsed: 13.784936ms
Sep 15 11:28:22.122: INFO: Pod "downwardapi-volume-d68edb9d-e9fd-4983-8e53-c3f8ba0a4b59": Phase="Pending", Reason="", readiness=false. Elapsed: 2.095876304s
Sep 15 11:28:24.126: INFO: Pod "downwardapi-volume-d68edb9d-e9fd-4983-8e53-c3f8ba0a4b59": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.100371005s
STEP: Saw pod success
Sep 15 11:28:24.126: INFO: Pod "downwardapi-volume-d68edb9d-e9fd-4983-8e53-c3f8ba0a4b59" satisfied condition "Succeeded or Failed"
Sep 15 11:28:24.129: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-d68edb9d-e9fd-4983-8e53-c3f8ba0a4b59 container client-container:
STEP: delete the pod
Sep 15 11:28:24.186: INFO: Waiting for pod downwardapi-volume-d68edb9d-e9fd-4983-8e53-c3f8ba0a4b59 to disappear
Sep 15 11:28:24.201: INFO: Pod downwardapi-volume-d68edb9d-e9fd-4983-8e53-c3f8ba0a4b59 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 15 11:28:24.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3668" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":303,"completed":186,"skipped":3146,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 15 11:28:24.209: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Sep 15 11:28:24.320: INFO: Waiting up to 5m0s for pod "downwardapi-volume-71a47dfa-1443-4c51-a73e-021d024a634d" in namespace "projected-111" to be "Succeeded or Failed"
Sep 15 11:28:24.344: INFO: Pod "downwardapi-volume-71a47dfa-1443-4c51-a73e-021d024a634d": Phase="Pending", Reason="", readiness=false. Elapsed: 24.047385ms
Sep 15 11:28:26.400: INFO: Pod "downwardapi-volume-71a47dfa-1443-4c51-a73e-021d024a634d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079786432s
Sep 15 11:28:28.405: INFO: Pod "downwardapi-volume-71a47dfa-1443-4c51-a73e-021d024a634d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.084263554s
STEP: Saw pod success
Sep 15 11:28:28.405: INFO: Pod "downwardapi-volume-71a47dfa-1443-4c51-a73e-021d024a634d" satisfied condition "Succeeded or Failed"
Sep 15 11:28:28.407: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-71a47dfa-1443-4c51-a73e-021d024a634d container client-container:
STEP: delete the pod
Sep 15 11:28:28.667: INFO: Waiting for pod downwardapi-volume-71a47dfa-1443-4c51-a73e-021d024a634d to disappear
Sep 15 11:28:28.675: INFO: Pod downwardapi-volume-71a47dfa-1443-4c51-a73e-021d024a634d no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 15 11:28:28.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-111" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":303,"completed":187,"skipped":3202,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 15 11:28:28.700: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a service. [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Service
STEP: Ensuring resource quota status captures service creation
STEP: Deleting a Service
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 15 11:28:39.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9210" for this suite.
• [SLOW TEST:11.179 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":303,"completed":188,"skipped":3235,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 15 11:28:39.880: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Sep 15 11:28:45.109: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 15 11:28:45.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-667" for this suite.
• [SLOW TEST:5.347 seconds]
[sig-apps] ReplicaSet
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":303,"completed":189,"skipped":3290,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 15 11:28:45.227: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:161
[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 15 11:28:45.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2437" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":303,"completed":190,"skipped":3298,"failed":0}
------------------------------
[sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 15 11:28:45.404: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256
[BeforeEach] Kubectl run pod
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1545
[It] should create a pod from an image when restart is Never [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Sep 15 11:28:45.496: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-9917'
Sep 15 11:28:51.589: INFO: stderr: ""
Sep 15 11:28:51.590: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod was created
[AfterEach] Kubectl run pod
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1550
Sep 15 11:28:51.661: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-9917'
Sep 15 11:28:56.012: INFO: stderr: ""
Sep 15 11:28:56.012: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 15 11:28:56.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9917" for this suite.
• [SLOW TEST:10.615 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Kubectl run pod
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1541
should create a pod from an image when restart is Never [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":303,"completed":191,"skipped":3298,"failed":0}
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 15 11:28:56.020: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[BeforeEach] when scheduling a busybox command that always fails in a pod
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82
[It] should be possible to delete [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 15 11:28:56.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3750" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":303,"completed":192,"skipped":3298,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] LimitRange
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 15 11:28:56.243: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename limitrange
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a LimitRange
STEP: Setting up watch
STEP: Submitting a LimitRange
Sep 15 11:28:56.347: INFO: observed the limitRanges list
STEP: Verifying LimitRange creation was observed
STEP: Fetching the LimitRange to ensure it has proper values
Sep 15 11:28:56.376: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}]
Sep 15 11:28:56.376: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Creating a Pod with no resource requirements
STEP: Ensuring Pod has resource requirements applied from LimitRange
Sep 15 11:28:56.388: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}]
Sep 15 11:28:56.388: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Creating a Pod with partial resource requirements
STEP: Ensuring Pod has merged resource requirements applied from LimitRange
Sep 15 11:28:56.423: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}]
Sep 15 11:28:56.423: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Failing to create a Pod with less than min resources
STEP: Failing to create a Pod with more than max resources
STEP: Updating a LimitRange
STEP: Verifying LimitRange updating is effective
STEP: Creating a Pod with less than former min resources
STEP: Failing to create a Pod with more than max resources
STEP: Deleting a LimitRange
STEP: Verifying the LimitRange was deleted
Sep 15 11:29:03.761: INFO: limitRange is already deleted
STEP: Creating a Pod with more than former max resources
[AfterEach] [sig-scheduling] LimitRange
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 15 11:29:03.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "limitrange-2132" for this suite.
• [SLOW TEST:7.655 seconds]
[sig-scheduling] LimitRange
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":303,"completed":193,"skipped":3323,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 15 11:29:03.899: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Sep 15 11:29:03.993: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 15 11:29:05.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-1149" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":303,"completed":194,"skipped":3332,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 15 11:29:05.038: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256
[It] should check if kubectl can dry-run update Pods [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Sep 15 11:29:05.198: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-8917'
Sep 15 11:29:05.499: INFO: stderr: ""
Sep 15 11:29:05.499: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: replace the image in the pod with server-side dry-run
Sep 15 11:29:05.499: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod -o json --namespace=kubectl-8917'
Sep 15 11:29:05.644: INFO: stderr: ""
Sep 15 11:29:05.644: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-09-15T11:29:05Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {}\n }\n },\n \"manager\": \"kubectl-run\",\n \"operation\": \"Update\",\n \"time\": \"2020-09-15T11:29:05Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-8917\",\n \"resourceVersion\": \"450763\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-8917/pods/e2e-test-httpd-pod\",\n \"uid\": \"19a91f88-afab-4347-ba3c-58d1c0363514\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-h24xl\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"kali-worker\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-h24xl\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-h24xl\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-09-15T11:29:05Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"phase\": \"Pending\",\n \"qosClass\": \"BestEffort\"\n }\n}\n"
Sep 15 11:29:05.645: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config replace -f - --dry-run server --namespace=kubectl-8917'
Sep 15 11:29:06.182: INFO: stderr: "W0915 11:29:05.719515 2385 helpers.go:553] --dry-run is deprecated and can be replaced with --dry-run=client.\n"
Sep 15 11:29:06.183: INFO: stdout: "pod/e2e-test-httpd-pod replaced (dry run)\n"
STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/httpd:2.4.38-alpine
Sep 15 11:29:06.226: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-8917'
Sep 15 11:29:08.039: INFO: stderr: ""
Sep 15 11:29:08.039: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 15 11:29:08.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8917" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":303,"completed":195,"skipped":3346,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Runtime
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 15 11:29:08.082: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 15 11:29:43.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5839" for this suite.
• [SLOW TEST:35.344 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
blackbox test
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41
when starting a container that exits
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:42
should run with the expected status [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":303,"completed":196,"skipped":3384,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 15 11:29:43.426: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-a7dc250d-a818-4923-aab4-31bd2603c2f1
STEP: Creating a pod to test consume secrets
Sep 15 11:29:43.497: INFO: Waiting up to 5m0s for pod "pod-secrets-7d740a6e-ef18-4b52-aa72-7bd6e4051a61" in namespace "secrets-5940" to be "Succeeded or Failed"
Sep 15 11:29:43.549: INFO: Pod "pod-secrets-7d740a6e-ef18-4b52-aa72-7bd6e4051a61": Phase="Pending", Reason="", readiness=false. Elapsed: 51.814853ms
Sep 15 11:29:45.553: INFO: Pod "pod-secrets-7d740a6e-ef18-4b52-aa72-7bd6e4051a61": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056198619s
Sep 15 11:29:47.559: INFO: Pod "pod-secrets-7d740a6e-ef18-4b52-aa72-7bd6e4051a61": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.061580136s
STEP: Saw pod success
Sep 15 11:29:47.559: INFO: Pod "pod-secrets-7d740a6e-ef18-4b52-aa72-7bd6e4051a61" satisfied condition "Succeeded or Failed"
Sep 15 11:29:47.562: INFO: Trying to get logs from node kali-worker pod pod-secrets-7d740a6e-ef18-4b52-aa72-7bd6e4051a61 container secret-volume-test:
STEP: delete the pod
Sep 15 11:29:47.592: INFO: Waiting for pod pod-secrets-7d740a6e-ef18-4b52-aa72-7bd6e4051a61 to disappear
Sep 15 11:29:47.609: INFO: Pod pod-secrets-7d740a6e-ef18-4b52-aa72-7bd6e4051a61 no longer exists
[AfterEach] [sig-storage] Secrets
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 15 11:29:47.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5940" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":197,"skipped":3394,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 15 11:29:47.616: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0915 11:29:48.794751 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Sep 15 11:30:50.899: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
[AfterEach] [sig-api-machinery] Garbage collector
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 15 11:30:50.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9818" for this suite.
• [SLOW TEST:63.295 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":303,"completed":198,"skipped":3403,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Networking
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 15 11:30:50.912: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Performing setup for networking test in namespace pod-network-test-4006
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Sep 15 11:30:51.017: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Sep 15 11:30:51.147: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep 15 11:30:53.206: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep 15 11:30:55.151: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep 15 11:30:57.152: INFO: The status of Pod netserver-0 is Running (Ready = false)
Sep 15 11:30:59.152: INFO: The status of Pod netserver-0 is Running (Ready = false)
Sep 15 11:31:01.152: INFO: The status of Pod netserver-0 is Running (Ready = false)
Sep 15 11:31:03.151: INFO: The status of Pod netserver-0 is Running (Ready = false)
Sep 15 11:31:05.151: INFO: The status of Pod netserver-0 is Running (Ready = false)
Sep 15 11:31:07.151: INFO: The status of Pod netserver-0 is Running (Ready = false)
Sep 15 11:31:09.151: INFO: The status of Pod netserver-0 is Running (Ready = false)
Sep 15 11:31:11.151: INFO: The status of Pod netserver-0 is Running (Ready = false)
Sep 15 11:31:13.151: INFO: The status of Pod netserver-0 is Running (Ready = true)
Sep 15 11:31:13.155: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Sep 15 11:31:17.183: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.170:8080/dial?request=hostname&protocol=http&host=10.244.1.169&port=8080&tries=1'] Namespace:pod-network-test-4006 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 15 11:31:17.183: INFO: >>> kubeConfig: /root/.kube/config
I0915 11:31:17.224333 7 log.go:181] (0xc0055e4160) (0xc000d39b80) Create stream
I0915 11:31:17.224360 7 log.go:181] (0xc0055e4160) (0xc000d39b80) Stream added, broadcasting: 1
I0915 11:31:17.227188 7 log.go:181] (0xc0055e4160) Reply frame received for 1
I0915 11:31:17.227247 7 log.go:181] (0xc0055e4160) (0xc0047d6000) Create stream
I0915 11:31:17.227332 7 log.go:181] (0xc0055e4160) (0xc0047d6000) Stream added, broadcasting: 3
I0915 11:31:17.228499 7 log.go:181] (0xc0055e4160) Reply frame received for 3
I0915 11:31:17.228546 7 log.go:181] (0xc0055e4160) (0xc0047d60a0) Create stream
I0915 11:31:17.228558 7 log.go:181] (0xc0055e4160) (0xc0047d60a0) Stream added, broadcasting: 5
I0915 11:31:17.229619 7 log.go:181] (0xc0055e4160) Reply frame received for 5
I0915 11:31:17.298931 7 log.go:181] (0xc0055e4160) Data frame received for 3
I0915 11:31:17.298969 7 log.go:181] (0xc0047d6000) (3) Data frame handling
I0915 11:31:17.299002 7 log.go:181] (0xc0047d6000) (3) Data frame sent
I0915 11:31:17.299754 7 log.go:181] (0xc0055e4160) Data frame received for 3
I0915 11:31:17.299804 7 log.go:181] (0xc0047d6000) (3) Data frame handling
I0915 11:31:17.299902 7 log.go:181] (0xc0055e4160) Data frame received for 5
I0915 11:31:17.299930 7 log.go:181] (0xc0047d60a0) (5) Data frame handling
I0915 11:31:17.302112 7 log.go:181] (0xc0055e4160) Data frame received for 1
I0915 11:31:17.302149 7 log.go:181] (0xc000d39b80) (1) Data frame handling
I0915 11:31:17.302165 7 log.go:181] (0xc000d39b80) (1) Data frame sent
I0915 11:31:17.302182 7 log.go:181] (0xc0055e4160) (0xc000d39b80) Stream removed, broadcasting: 1
I0915 11:31:17.302206 7 log.go:181] (0xc0055e4160) Go away received
I0915 11:31:17.302504 7 log.go:181] (0xc0055e4160) (0xc000d39b80) Stream removed, broadcasting: 1
I0915 11:31:17.302533 7 log.go:181] (0xc0055e4160) (0xc0047d6000) Stream removed, broadcasting: 3
I0915 11:31:17.302548 7 log.go:181] (0xc0055e4160) (0xc0047d60a0) Stream removed, broadcasting: 5
Sep 15 11:31:17.302: INFO: Waiting for responses: map[]
Sep 15 11:31:17.306: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.170:8080/dial?request=hostname&protocol=http&host=10.244.2.177&port=8080&tries=1'] Namespace:pod-network-test-4006 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 15 11:31:17.306: INFO: >>> kubeConfig: /root/.kube/config
I0915 11:31:17.338735 7 log.go:181] (0xc002f9abb0) (0xc001902640) Create stream
I0915 11:31:17.338757 7 log.go:181] (0xc002f9abb0) (0xc001902640) Stream added, broadcasting: 1
I0915 11:31:17.341621 7 log.go:181] (0xc002f9abb0) Reply frame received for 1
I0915 11:31:17.341659 7 log.go:181] (0xc002f9abb0) (0xc0047d6140) Create stream
I0915 11:31:17.341672 7 log.go:181] (0xc002f9abb0) (0xc0047d6140) Stream added, broadcasting: 3
I0915 11:31:17.342715 7 log.go:181] (0xc002f9abb0) Reply frame received for 3
I0915 11:31:17.342758 7 log.go:181] (0xc002f9abb0) (0xc0047d61e0) Create stream
I0915 11:31:17.342777 7 log.go:181] (0xc002f9abb0) (0xc0047d61e0) Stream added, broadcasting: 5
I0915 11:31:17.343883 7 log.go:181] (0xc002f9abb0) Reply frame received for 5
I0915 11:31:17.416026 7 log.go:181] (0xc002f9abb0) Data frame received for 3
I0915 11:31:17.416060 7 log.go:181] (0xc0047d6140) (3) Data frame handling
I0915 11:31:17.416077 7 log.go:181] (0xc0047d6140) (3) Data frame sent
I0915 11:31:17.416703 7 log.go:181] (0xc002f9abb0) Data frame received for 3
I0915 11:31:17.416730 7 log.go:181] (0xc0047d6140) (3) Data frame handling
I0915 11:31:17.416877 7 log.go:181] (0xc002f9abb0) Data frame received for 5
I0915 11:31:17.416897 7 log.go:181] (0xc0047d61e0) (5) Data frame handling
I0915 11:31:17.418706 7 log.go:181] (0xc002f9abb0) Data frame received for 1
I0915 11:31:17.418727 7 log.go:181] (0xc001902640) (1) Data frame handling
I0915 11:31:17.418741 7 log.go:181] (0xc001902640) (1) Data frame sent
I0915 11:31:17.418750 7 log.go:181] (0xc002f9abb0) (0xc001902640) Stream removed, broadcasting: 1
I0915 11:31:17.418814 7 log.go:181] (0xc002f9abb0) Go away received
I0915 11:31:17.418850 7 log.go:181] (0xc002f9abb0) (0xc001902640) Stream removed, broadcasting: 1
I0915 11:31:17.418873 7 log.go:181] (0xc002f9abb0) (0xc0047d6140) Stream removed, broadcasting: 3
I0915 11:31:17.418899 7 log.go:181] (0xc002f9abb0) (0xc0047d61e0) Stream removed, broadcasting: 5
Sep 15 11:31:17.418: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 15 11:31:17.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-4006" for this suite.
• [SLOW TEST:26.516 seconds]
[sig-network] Networking
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":303,"completed":199,"skipped":3521,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Networking
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 15 11:31:17.429: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Performing setup for networking test in namespace pod-network-test-5420
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Sep 15 11:31:17.518: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Sep 15 11:31:17.580: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep 15 11:31:19.585: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep 15 11:31:21.598: INFO: The status of Pod netserver-0 is Running (Ready = false)
Sep 15 11:31:23.584: INFO: The status of Pod netserver-0 is Running (Ready = false)
Sep 15 11:31:25.584: INFO: The status of Pod netserver-0 is Running (Ready = false)
Sep 15 11:31:27.586: INFO: The status of Pod netserver-0 is Running (Ready = false)
Sep 15 11:31:29.585: INFO: The status of Pod netserver-0 is Running (Ready = false)
Sep 15 11:31:31.585: INFO: The status of Pod netserver-0 is Running (Ready = false)
Sep 15 11:31:33.584: INFO: The status of Pod netserver-0 is Running (Ready = false)
Sep 15 11:31:35.585: INFO: The status of Pod netserver-0 is Running (Ready = true)
Sep 15 11:31:35.591: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Sep 15 11:31:39.673: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.171:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-5420 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 15 11:31:39.673: INFO: >>> kubeConfig: /root/.kube/config
I0915 11:31:39.719379 7 log.go:181] (0xc002b8e4d0) (0xc0022d3b80) Create stream
I0915 11:31:39.719411 7 log.go:181] (0xc002b8e4d0) (0xc0022d3b80) Stream added, broadcasting: 1
I0915 11:31:39.721833 7 log.go:181] (0xc002b8e4d0) Reply frame received for 1
I0915 11:31:39.721873 7 log.go:181] (0xc002b8e4d0) (0xc000d39c20) Create stream
I0915 11:31:39.721892 7 log.go:181] (0xc002b8e4d0) (0xc000d39c20) Stream added, broadcasting: 3
I0915 11:31:39.723096 7 log.go:181] (0xc002b8e4d0) Reply frame received for 3
I0915 11:31:39.723137 7 log.go:181] (0xc002b8e4d0) (0xc0014acaa0) Create stream
I0915 11:31:39.723147 7 log.go:181] (0xc002b8e4d0) (0xc0014acaa0) Stream added, broadcasting: 5
I0915 11:31:39.724400 7 log.go:181] (0xc002b8e4d0) Reply frame received for 5
I0915 11:31:39.795021 7 log.go:181] (0xc002b8e4d0) Data frame received for 5
I0915 11:31:39.795057 7 log.go:181] (0xc0014acaa0) (5) Data frame handling
I0915 11:31:39.795095 7 log.go:181] (0xc002b8e4d0) Data frame received for 3
I0915 11:31:39.795131 7 log.go:181] (0xc000d39c20) (3) Data frame handling
I0915 11:31:39.795160 7 log.go:181] (0xc000d39c20) (3) Data frame sent
I0915 11:31:39.795248 7 log.go:181] (0xc002b8e4d0) Data frame received for 3
I0915 11:31:39.795281 7 log.go:181] (0xc000d39c20) (3) Data frame handling
I0915 11:31:39.797281 7 log.go:181] (0xc002b8e4d0) Data frame received for 1
I0915 11:31:39.797299 7 log.go:181] (0xc0022d3b80) (1) Data frame handling
I0915 11:31:39.797315 7 log.go:181] (0xc0022d3b80) (1) Data frame sent
I0915 11:31:39.797333 7 log.go:181] (0xc002b8e4d0) (0xc0022d3b80) Stream removed, broadcasting: 1
I0915 11:31:39.797352 7 log.go:181] (0xc002b8e4d0) Go away received
I0915 11:31:39.797445 7 log.go:181] (0xc002b8e4d0) (0xc0022d3b80) Stream removed, broadcasting: 1
I0915 11:31:39.797480 7 log.go:181] (0xc002b8e4d0) (0xc000d39c20) Stream removed, broadcasting: 3
I0915 11:31:39.797505 7 log.go:181] (0xc002b8e4d0) (0xc0014acaa0) Stream removed, broadcasting: 5
Sep 15 11:31:39.797: INFO: Found all expected endpoints: [netserver-0]
Sep 15 11:31:39.801: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.178:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-5420 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 15 11:31:39.801: INFO: >>> kubeConfig: /root/.kube/config
I0915 11:31:39.830793 7 log.go:181] (0xc002b8ed10) (0xc00084e320) Create stream
I0915 11:31:39.830825 7 log.go:181] (0xc002b8ed10) (0xc00084e320) Stream added, broadcasting: 1
I0915 11:31:39.833132 7 log.go:181] (0xc002b8ed10) Reply frame received for 1
I0915 11:31:39.833182 7 log.go:181] (0xc002b8ed10) (0xc0047d6280) Create stream
I0915 11:31:39.833202 7 log.go:181] (0xc002b8ed10) (0xc0047d6280) Stream added, broadcasting: 3
I0915 11:31:39.834074 7 log.go:181] (0xc002b8ed10) Reply frame received for 3
I0915 11:31:39.834119 7 log.go:181] (0xc002b8ed10) (0xc0037f5180) Create stream
I0915 11:31:39.834135 7 log.go:181] (0xc002b8ed10) (0xc0037f5180) Stream added, broadcasting: 5
I0915 11:31:39.835022 7 log.go:181] (0xc002b8ed10) Reply frame received for 5
I0915 11:31:39.905156 7 log.go:181] (0xc002b8ed10) Data frame received for 3
I0915 11:31:39.905185 7 log.go:181] (0xc0047d6280) (3) Data frame handling
I0915 11:31:39.905210 7 log.go:181] (0xc0047d6280) (3) Data frame sent
I0915 11:31:39.905224 7 log.go:181] (0xc002b8ed10) Data frame received for 3
I0915 11:31:39.905234 7 log.go:181] (0xc0047d6280) (3) Data frame handling
I0915 11:31:39.905412 7 log.go:181] (0xc002b8ed10) Data frame received for 5
I0915 11:31:39.905450 7 log.go:181] (0xc0037f5180) (5) Data frame handling
I0915 11:31:39.907053 7 log.go:181] (0xc002b8ed10) Data frame received for 1
I0915 11:31:39.907087 7 log.go:181] (0xc00084e320) (1) Data frame handling
I0915 11:31:39.907107 7 log.go:181] (0xc00084e320) (1) Data frame sent
I0915 11:31:39.907135 7 log.go:181] (0xc002b8ed10) (0xc00084e320) Stream removed, broadcasting: 1
I0915 11:31:39.907153 7 log.go:181] (0xc002b8ed10) Go away received
I0915 11:31:39.907249 7 log.go:181] (0xc002b8ed10) (0xc00084e320) Stream removed, broadcasting: 1
I0915 11:31:39.907273 7 log.go:181] (0xc002b8ed10) (0xc0047d6280) Stream removed, broadcasting: 3
I0915 11:31:39.907291 7 log.go:181] (0xc002b8ed10) (0xc0037f5180) Stream removed, broadcasting: 5
Sep 15 11:31:39.907: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 15 11:31:39.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-5420" for this suite.
• [SLOW TEST:22.487 seconds]
[sig-network] Networking
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":200,"skipped":3547,"failed":0}
SS
------------------------------
[sig-network] Services should be able to create a functioning NodePort service [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 15 11:31:39.917: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782
[It] should be able to create a functioning NodePort service [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating service nodeport-test with type=NodePort in namespace services-4044
STEP: creating replication controller nodeport-test in namespace services-4044
I0915 11:31:40.066245 7 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-4044, replica count: 2
I0915 11:31:43.116757 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0915 11:31:46.117038 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Sep 15 11:31:46.117: INFO: Creating new exec pod
Sep 15 11:31:51.182: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=services-4044 execpodzcq46 -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80'
Sep 15 11:31:51.408: INFO: stderr: "I0915 11:31:51.304220 2417 log.go:181] (0xc00056cfd0) (0xc000386aa0) Create stream\nI0915 11:31:51.304271 2417 log.go:181] (0xc00056cfd0) (0xc000386aa0) Stream added, broadcasting: 1\nI0915 11:31:51.309407 2417 log.go:181] (0xc00056cfd0) Reply frame received for 1\nI0915 11:31:51.309443 2417 log.go:181] (0xc00056cfd0) (0xc00059c6e0) Create stream\nI0915 11:31:51.309454 2417 log.go:181] (0xc00056cfd0) (0xc00059c6e0) Stream added, broadcasting: 3\nI0915 11:31:51.310434 2417 log.go:181] (0xc00056cfd0) Reply frame received for 3\nI0915 11:31:51.310495 2417 log.go:181] (0xc00056cfd0) (0xc0003875e0) Create stream\nI0915 11:31:51.310509 2417 log.go:181] (0xc00056cfd0) (0xc0003875e0) Stream added, broadcasting: 5\nI0915 11:31:51.311397 2417 log.go:181] (0xc00056cfd0) Reply frame received for 5\nI0915 11:31:51.400436 2417 log.go:181] (0xc00056cfd0) Data frame received for 5\nI0915 11:31:51.400467 2417 log.go:181] (0xc00056cfd0) Data frame received for 3\nI0915 11:31:51.400481 2417 log.go:181] (0xc00059c6e0) (3) Data frame handling\nI0915 11:31:51.400507 2417 log.go:181] (0xc0003875e0) (5) Data frame handling\nI0915 11:31:51.400567 2417 log.go:181] (0xc0003875e0) (5) Data frame sent\nI0915 11:31:51.400582 2417 log.go:181] (0xc00056cfd0) Data frame received for 5\nI0915 11:31:51.400593 2417 log.go:181] (0xc0003875e0) (5) Data frame handling\n+ nc -zv -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0915 11:31:51.400622 2417 log.go:181] (0xc0003875e0) (5) Data frame sent\nI0915 11:31:51.400745 2417 log.go:181] (0xc00056cfd0) Data frame received for 5\nI0915 11:31:51.400774 2417 log.go:181] (0xc0003875e0) (5) Data frame handling\nI0915 11:31:51.402567 2417 log.go:181] (0xc00056cfd0) Data frame received for 1\nI0915 11:31:51.402603 2417 log.go:181] (0xc000386aa0) (1) Data frame handling\nI0915 11:31:51.402627 2417 log.go:181] (0xc000386aa0) (1) Data frame sent\nI0915 11:31:51.402651 2417 log.go:181] (0xc00056cfd0) (0xc000386aa0) Stream removed, broadcasting: 1\nI0915 11:31:51.402681 2417 log.go:181] (0xc00056cfd0) Go away received\nI0915 11:31:51.403140 2417 log.go:181] (0xc00056cfd0) (0xc000386aa0) Stream removed, broadcasting: 1\nI0915 11:31:51.403164 2417 log.go:181] (0xc00056cfd0) (0xc00059c6e0) Stream removed, broadcasting: 3\nI0915 11:31:51.403177 2417 log.go:181] (0xc00056cfd0) (0xc0003875e0) Stream removed, broadcasting: 5\n"
Sep 15 11:31:51.408: INFO: stdout: ""
Sep 15 11:31:51.409: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=services-4044 execpodzcq46 -- /bin/sh -x -c nc -zv -t -w 2 10.109.118.22 80'
Sep 15 11:31:51.629: INFO: stderr: "I0915 11:31:51.535556 2435 log.go:181] (0xc0002176b0) (0xc000bf4820) Create stream\nI0915 11:31:51.535602 2435 log.go:181] (0xc0002176b0) (0xc000bf4820) Stream added, broadcasting: 1\nI0915 11:31:51.537231 2435 log.go:181] (0xc0002176b0) Reply frame received for 1\nI0915 11:31:51.537259 2435 log.go:181] (0xc0002176b0) (0xc000c0b720) Create stream\nI0915 11:31:51.537268 2435 log.go:181] (0xc0002176b0) (0xc000c0b720) Stream added, broadcasting: 3\nI0915 11:31:51.538093 2435 log.go:181] (0xc0002176b0) Reply frame received for 3\nI0915 11:31:51.538127 2435 log.go:181] (0xc0002176b0) (0xc000bf5180) Create stream\nI0915 11:31:51.538142 2435 log.go:181] (0xc0002176b0) (0xc000bf5180) Stream added, broadcasting: 5\nI0915 11:31:51.539174 2435 log.go:181] (0xc0002176b0) Reply frame received for 5\nI0915 11:31:51.622079 2435 log.go:181] (0xc0002176b0) Data frame received for 3\nI0915 11:31:51.622115 2435 log.go:181] (0xc000c0b720) (3) Data frame handling\nI0915 11:31:51.622143 2435 log.go:181] (0xc0002176b0) Data frame received for 5\nI0915 11:31:51.622154 2435 log.go:181] (0xc000bf5180) (5) Data frame handling\nI0915 11:31:51.622180 2435 log.go:181] (0xc000bf5180) (5) Data frame sent\nI0915 11:31:51.622191 2435 log.go:181] (0xc0002176b0) Data frame received for 5\nI0915 11:31:51.622198 2435 log.go:181] (0xc000bf5180) (5) Data frame handling\n+ nc -zv -t -w 2 10.109.118.22 80\nConnection to 10.109.118.22 80 port [tcp/http] succeeded!\nI0915 11:31:51.624091 2435 log.go:181] (0xc0002176b0) Data frame received for 1\nI0915 11:31:51.624125 2435 log.go:181] (0xc000bf4820) (1) Data frame handling\nI0915 11:31:51.624233 2435 log.go:181] (0xc000bf4820) (1) Data frame sent\nI0915 11:31:51.624262 2435 log.go:181] (0xc0002176b0) (0xc000bf4820) Stream removed, broadcasting: 1\nI0915 11:31:51.624280 2435 log.go:181] (0xc0002176b0) Go away received\nI0915 11:31:51.624674 2435 log.go:181] (0xc0002176b0) (0xc000bf4820) Stream removed, broadcasting: 1\nI0915 11:31:51.624708 2435 log.go:181] (0xc0002176b0) (0xc000c0b720) Stream removed, broadcasting: 3\nI0915 11:31:51.624730 2435 log.go:181] (0xc0002176b0) (0xc000bf5180) Stream removed, broadcasting: 5\n"
Sep 15 11:31:51.629: INFO: stdout: ""
Sep 15 11:31:51.629: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=services-4044 execpodzcq46 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.11 31828'
Sep 15 11:31:51.852: INFO: stderr: "I0915 11:31:51.770856 2453 log.go:181] (0xc000b2b080) (0xc000b366e0) Create stream\nI0915 11:31:51.770907 2453 log.go:181] (0xc000b2b080) (0xc000b366e0) Stream added, broadcasting: 1\nI0915 11:31:51.777199 2453 log.go:181] (0xc000b2b080) Reply frame received for 1\nI0915 11:31:51.777245 2453 log.go:181] (0xc000b2b080) (0xc000b36000) Create stream\nI0915 11:31:51.777259 2453 log.go:181] (0xc000b2b080) (0xc000b36000) Stream added, broadcasting: 3\nI0915 11:31:51.779485 2453 log.go:181] (0xc000b2b080) Reply frame received for 3\nI0915 11:31:51.779554 2453 log.go:181] (0xc000b2b080) (0xc000ca60a0) Create stream\nI0915 11:31:51.779575 2453 log.go:181] (0xc000b2b080) (0xc000ca60a0) Stream added, broadcasting: 5\nI0915 11:31:51.780558 2453 log.go:181] (0xc000b2b080) Reply frame received for 5\nI0915 11:31:51.845446 2453 log.go:181] (0xc000b2b080) Data frame received for 5\nI0915 11:31:51.845480 2453 log.go:181] (0xc000ca60a0) (5) Data frame handling\nI0915 11:31:51.845492 2453 log.go:181] (0xc000ca60a0) (5) Data frame sent\nI0915 11:31:51.845500 2453 log.go:181] (0xc000b2b080) Data frame received for 5\nI0915 11:31:51.845506 2453 log.go:181] (0xc000ca60a0) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.11 31828\nConnection to 172.18.0.11 31828 port [tcp/31828] succeeded!\nI0915 11:31:51.845517 2453 log.go:181] (0xc000b2b080) Data frame received for 3\nI0915 11:31:51.845525 2453 log.go:181] (0xc000b36000) (3) Data frame handling\nI0915 11:31:51.847098 2453 log.go:181] (0xc000b2b080) Data frame received for 1\nI0915 11:31:51.847234 2453 log.go:181] (0xc000b366e0) (1) Data frame handling\nI0915 11:31:51.847271 2453 log.go:181] (0xc000b366e0) (1) Data frame sent\nI0915 11:31:51.847287 2453 log.go:181] (0xc000b2b080) (0xc000b366e0) Stream removed, broadcasting: 1\nI0915 11:31:51.847309 2453 log.go:181] (0xc000b2b080) Go away received\nI0915 11:31:51.847710 2453 log.go:181] (0xc000b2b080) (0xc000b366e0) Stream removed, broadcasting: 1\nI0915 11:31:51.847747 2453 log.go:181] (0xc000b2b080) (0xc000b36000) Stream removed, broadcasting: 3\nI0915 11:31:51.847769 2453 log.go:181] (0xc000b2b080) (0xc000ca60a0) Stream removed, broadcasting: 5\n"
Sep 15 11:31:51.853: INFO: stdout: ""
Sep 15 11:31:51.853: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=services-4044 execpodzcq46 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.12 31828'
Sep 15 11:31:52.058: INFO: stderr: "I0915 11:31:51.991468 2472 log.go:181] (0xc000276000) (0xc000c081e0) Create stream\nI0915 11:31:51.991533 2472 log.go:181] (0xc000276000) (0xc000c081e0) Stream added, broadcasting: 1\nI0915 11:31:51.993235 2472 log.go:181] (0xc000276000) Reply frame received for 1\nI0915 11:31:51.993274 2472 log.go:181] (0xc000276000) (0xc000e90000) Create stream\nI0915 11:31:51.993282 2472 log.go:181] (0xc000276000) (0xc000e90000) Stream added, broadcasting: 3\nI0915 11:31:51.994206 2472 log.go:181] (0xc000276000) Reply frame received for 3\nI0915 11:31:51.994238 2472 log.go:181] (0xc000276000) (0xc0009a0280) Create stream\nI0915 11:31:51.994246 2472 log.go:181] (0xc000276000) (0xc0009a0280) Stream added, broadcasting: 5\nI0915 11:31:51.995087 2472 log.go:181] (0xc000276000) Reply frame received for 5\nI0915 11:31:52.051589 2472 log.go:181] (0xc000276000) Data frame received for 5\nI0915 11:31:52.051622 2472 log.go:181] (0xc0009a0280) (5) Data frame handling\nI0915 11:31:52.051649 2472 log.go:181] (0xc0009a0280) (5) Data frame sent\nI0915 11:31:52.051665 2472 log.go:181] (0xc000276000) Data frame received for 5\nI0915 11:31:52.051681 2472 log.go:181] (0xc0009a0280) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.12 31828\nConnection to 172.18.0.12 31828 port [tcp/31828] succeeded!\nI0915 11:31:52.051720 2472 log.go:181] (0xc0009a0280) (5) Data frame sent\nI0915 11:31:52.052128 2472 log.go:181] (0xc000276000) Data frame received for 5\nI0915 11:31:52.052212 2472 log.go:181] (0xc0009a0280) (5) Data frame handling\nI0915 11:31:52.052247 2472 log.go:181] (0xc000276000) Data frame received for 3\nI0915 11:31:52.052269 2472 log.go:181] (0xc000e90000) (3) Data frame handling\nI0915 11:31:52.053655 2472 log.go:181] (0xc000276000) Data frame received for 1\nI0915 11:31:52.053675 2472 log.go:181] (0xc000c081e0) (1) Data frame handling\nI0915 11:31:52.053691 2472 log.go:181] (0xc000c081e0) (1) Data frame sent\nI0915 11:31:52.053708 2472 log.go:181] (0xc000276000) (0xc000c081e0) Stream removed, broadcasting: 1\nI0915 11:31:52.053757 2472 log.go:181] (0xc000276000) Go away received\nI0915 11:31:52.054101 2472 log.go:181] (0xc000276000) (0xc000c081e0) Stream removed, broadcasting: 1\nI0915 11:31:52.054117 2472 log.go:181] (0xc000276000) (0xc000e90000) Stream removed, broadcasting: 3\nI0915 11:31:52.054125 2472 log.go:181] (0xc000276000) (0xc0009a0280) Stream removed, broadcasting: 5\n"
Sep 15 11:31:52.058: INFO: stdout: ""
[AfterEach] [sig-network] Services
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 15 11:31:52.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4044" for this suite.
[AfterEach] [sig-network] Services
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786
• [SLOW TEST:12.151 seconds]
[sig-network] Services
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to create a functioning NodePort service [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":303,"completed":201,"skipped":3549,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 15 11:31:52.068: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should retry creating failed daemon pods [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Sep 15 11:31:52.199: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 15 11:31:52.212: INFO: Number of nodes with available pods: 0
Sep 15 11:31:52.212: INFO: Node kali-worker is running more than one daemon pod
Sep 15 11:31:53.217: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 15 11:31:53.220: INFO: Number of nodes with available pods: 0
Sep 15 11:31:53.220: INFO: Node kali-worker is running more than one daemon pod
Sep 15 11:31:54.266: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 15 11:31:54.270: INFO: Number of nodes with available pods: 0
Sep 15 11:31:54.270: INFO: Node kali-worker is running more than one daemon pod
Sep 15 11:31:55.217: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 15 11:31:55.220: INFO: Number of nodes with available pods: 0
Sep 15 11:31:55.220: INFO: Node kali-worker is running more than one daemon pod
Sep 15 11:31:56.216: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 15 11:31:56.219: INFO: Number of nodes with available pods: 0
Sep 15 11:31:56.219: INFO: Node kali-worker is running more than one daemon pod
Sep 15 11:31:57.228: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 15 11:31:57.233: INFO: Number of nodes with available pods: 2
Sep 15 11:31:57.233: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Sep 15 11:31:57.335: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 15 11:31:57.402: INFO: Number of nodes with available pods: 1
Sep 15 11:31:57.402: INFO: Node kali-worker2 is running more than one daemon pod
Sep 15 11:31:58.482: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 15 11:31:58.486: INFO: Number of nodes with available pods: 1
Sep 15 11:31:58.486: INFO: Node kali-worker2 is running more than one daemon pod
Sep 15 11:31:59.408: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 15 11:31:59.412: INFO: Number of nodes with available pods: 1
Sep 15 11:31:59.413: INFO: Node kali-worker2 is running more than one daemon pod
Sep 15 11:32:00.432: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 15 11:32:00.518: INFO: Number of nodes with available pods: 1
Sep 15 11:32:00.518: INFO: Node kali-worker2 is running more than one daemon pod
Sep 15 11:32:01.408: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 15 11:32:01.412: INFO: Number of nodes with available pods: 2
Sep 15 11:32:01.412: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2346, will wait for the garbage collector to delete the pods
Sep 15 11:32:01.478: INFO: Deleting DaemonSet.extensions daemon-set took: 7.526653ms
Sep 15 11:32:01.578: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.33367ms
Sep 15 11:32:13.282: INFO: Number of nodes with available pods: 0
Sep 15 11:32:13.282: INFO: Number of running nodes: 0, number of available pods: 0
Sep 15 11:32:13.285: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2346/daemonsets","resourceVersion":"451787"},"items":null}
Sep 15 11:32:13.287: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2346/pods","resourceVersion":"451787"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 15 11:32:13.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-2346" for this suite.
• [SLOW TEST:21.242 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should retry creating failed daemon pods [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":303,"completed":202,"skipped":3563,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 15 11:32:13.311: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod busybox-1b088c07-6a35-4421-ac84-0d97fbe57d96 in namespace container-probe-8129
Sep 15 11:32:17.423: INFO: Started pod busybox-1b088c07-6a35-4421-ac84-0d97fbe57d96 in namespace container-probe-8129
STEP: checking the pod's current state and verifying that restartCount is present
Sep 15 11:32:17.426: INFO: Initial restart count of pod busybox-1b088c07-6a35-4421-ac84-0d97fbe57d96 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 15 11:36:18.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8129" for this suite.
• [SLOW TEST:244.943 seconds]
[k8s.io] Probing container
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":303,"completed":203,"skipped":3577,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Watchers
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 15 11:36:18.254: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Sep 15 11:36:18.341: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4153 /api/v1/namespaces/watch-4153/configmaps/e2e-watch-test-configmap-a c6d2c9ec-f15a-4521-8535-a9a0add9432b 452543 0 2020-09-15 11:36:18 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-09-15 11:36:18 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Sep 15 11:36:18.341: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4153 /api/v1/namespaces/watch-4153/configmaps/e2e-watch-test-configmap-a c6d2c9ec-f15a-4521-8535-a9a0add9432b 452543 0 2020-09-15 11:36:18 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-09-15 11:36:18 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Sep 15 11:36:28.390: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4153 /api/v1/namespaces/watch-4153/configmaps/e2e-watch-test-configmap-a c6d2c9ec-f15a-4521-8535-a9a0add9432b 452583 0 2020-09-15 11:36:18 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-09-15 11:36:28 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
Sep 15 11:36:28.390: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4153 /api/v1/namespaces/watch-4153/configmaps/e2e-watch-test-configmap-a c6d2c9ec-f15a-4521-8535-a9a0add9432b 452583 0 2020-09-15 11:36:18 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-09-15 11:36:28 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Sep 15 11:36:38.399: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4153 /api/v1/namespaces/watch-4153/configmaps/e2e-watch-test-configmap-a c6d2c9ec-f15a-4521-8535-a9a0add9432b 452611 0 2020-09-15 11:36:18 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-09-15 11:36:38 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Sep 15 11:36:38.400: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4153 /api/v1/namespaces/watch-4153/configmaps/e2e-watch-test-configmap-a c6d2c9ec-f15a-4521-8535-a9a0add9432b 452611 0 2020-09-15 11:36:18 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-09-15 11:36:38 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Sep 15 11:36:48.408: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4153 /api/v1/namespaces/watch-4153/configmaps/e2e-watch-test-configmap-a c6d2c9ec-f15a-4521-8535-a9a0add9432b 452641 0 2020-09-15 11:36:18 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-09-15 11:36:38 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Sep 15 11:36:48.408: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4153 /api/v1/namespaces/watch-4153/configmaps/e2e-watch-test-configmap-a c6d2c9ec-f15a-4521-8535-a9a0add9432b 452641 0 2020-09-15 11:36:18 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-09-15 11:36:38 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Sep 15 11:36:58.416: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-4153 /api/v1/namespaces/watch-4153/configmaps/e2e-watch-test-configmap-b aff0c0d4-77d2-48ea-98f6-fa77c0257b72 452671 0 2020-09-15 11:36:58 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-09-15 11:36:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Sep 15 11:36:58.416: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-4153 /api/v1/namespaces/watch-4153/configmaps/e2e-watch-test-configmap-b aff0c0d4-77d2-48ea-98f6-fa77c0257b72 452671 0 2020-09-15 11:36:58 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-09-15 11:36:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Sep 15 11:37:08.425: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-4153 /api/v1/namespaces/watch-4153/configmaps/e2e-watch-test-configmap-b aff0c0d4-77d2-48ea-98f6-fa77c0257b72 452699 0 2020-09-15 11:36:58 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-09-15 11:36:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Sep 15 11:37:08.425: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-4153 /api/v1/namespaces/watch-4153/configmaps/e2e-watch-test-configmap-b aff0c0d4-77d2-48ea-98f6-fa77c0257b72 452699 0 2020-09-15 11:36:58 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-09-15 11:36:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 15 11:37:18.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4153" for this suite.
• [SLOW TEST:60.183 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should observe add, update, and delete watch notifications on configmaps [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":303,"completed":204,"skipped":3613,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-instrumentation] Events API
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 15 11:37:18.438: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-instrumentation] Events API
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81
[It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a test event
STEP: listing events in all namespaces
STEP: listing events in test namespace
STEP: listing events with field selection filtering on source
STEP: listing events with field selection filtering on reportingController
STEP: getting the test event
STEP: patching the test event
STEP: getting the test event
STEP: updating the test event
STEP: getting the test event
STEP: deleting the test event
STEP: listing events in all namespaces
STEP: listing events in test namespace
[AfterEach] [sig-instrumentation] Events API
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 15 11:37:18.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-8745" for this suite.
•{"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":303,"completed":205,"skipped":3625,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 15 11:37:18.606: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256
[It] should check if kubectl describe prints relevant information for rc and pods [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Sep 15 11:37:18.700: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3150'
Sep 15 11:37:19.023: INFO: stderr: ""
Sep 15 11:37:19.023: INFO: stdout: "replicationcontroller/agnhost-primary created\n"
Sep 15 11:37:19.023: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3150'
Sep 15 11:37:19.408: INFO: stderr: ""
Sep 15 11:37:19.408: INFO: stdout: "service/agnhost-primary created\n"
STEP: Waiting for Agnhost primary to start.
Sep 15 11:37:20.412: INFO: Selector matched 1 pods for map[app:agnhost]
Sep 15 11:37:20.413: INFO: Found 0 / 1
Sep 15 11:37:21.413: INFO: Selector matched 1 pods for map[app:agnhost]
Sep 15 11:37:21.413: INFO: Found 0 / 1
Sep 15 11:37:22.414: INFO: Selector matched 1 pods for map[app:agnhost]
Sep 15 11:37:22.414: INFO: Found 1 / 1
Sep 15 11:37:22.414: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Sep 15 11:37:22.417: INFO: Selector matched 1 pods for map[app:agnhost]
Sep 15 11:37:22.417: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Sep 15 11:37:22.417: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config describe pod agnhost-primary-vw7jr --namespace=kubectl-3150'
Sep 15 11:37:22.543: INFO: stderr: ""
Sep 15 11:37:22.543: INFO: stdout: "Name: agnhost-primary-vw7jr\nNamespace: kubectl-3150\nPriority: 0\nNode: kali-worker/172.18.0.11\nStart Time: Tue, 15 Sep 2020 11:37:19 +0000\nLabels: app=agnhost\n role=primary\nAnnotations: \nStatus: Running\nIP: 10.244.1.176\nIPs:\n IP: 10.244.1.176\nControlled By: ReplicationController/agnhost-primary\nContainers:\n agnhost-primary:\n Container ID: containerd://47facb63da2b1fb82a7399eb247215de4d6bbdaf5cf8eaf0187745634010ce38\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.20\n Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Tue, 15 Sep 2020 11:37:21 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-zc9cw (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-zc9cw:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-zc9cw\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 3s default-scheduler Successfully assigned kubectl-3150/agnhost-primary-vw7jr to kali-worker\n Normal Pulled 2s kubelet, kali-worker Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.20\" already present on machine\n Normal Created 1s kubelet, kali-worker Created container agnhost-primary\n Normal Started 1s kubelet, kali-worker Started container agnhost-primary\n"
Sep 15 11:37:22.543: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config describe rc agnhost-primary --namespace=kubectl-3150'
Sep 15 11:37:22.677: INFO: stderr: ""
Sep 15 11:37:22.677: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-3150\nSelector: app=agnhost,role=primary\nLabels: app=agnhost\n role=primary\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=primary\n Containers:\n agnhost-primary:\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.20\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 3s replication-controller Created pod: agnhost-primary-vw7jr\n"
Sep 15 11:37:22.677: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config describe service agnhost-primary --namespace=kubectl-3150'
Sep 15 11:37:22.789: INFO: stderr: ""
Sep 15 11:37:22.789: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-3150\nLabels: app=agnhost\n role=primary\nAnnotations: \nSelector: app=agnhost,role=primary\nType: ClusterIP\nIP: 10.109.30.100\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.1.176:6379\nSession Affinity: None\nEvents: \n"
Sep 15 11:37:22.793: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config describe node kali-control-plane'
Sep 15 11:37:22.938: INFO: stderr: ""
Sep 15 11:37:22.939: INFO: stdout: "Name: kali-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=kali-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 13 Sep 2020 16:56:52 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: kali-control-plane\n AcquireTime: \n RenewTime: Tue, 15 Sep 2020 11:37:17 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Tue, 15 Sep 2020 11:34:03 +0000 Sun, 13 Sep 2020 16:56:52 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Tue, 15 Sep 2020 11:34:03 +0000 Sun, 13 Sep 2020 16:56:52 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Tue, 15 Sep 2020 11:34:03 +0000 Sun, 13 Sep 2020 16:56:52 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Tue, 15 Sep 2020 11:34:03 +0000 Sun, 13 Sep 2020 16:57:42 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.18.0.13\n Hostname: kali-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759868Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759868Ki\n pods: 110\nSystem Info:\n Machine ID: 014def55fc1b49ad9a05fccd634c789f\n System UUID: d1b3cd05-ea3b-4919-8b5e-667c68c9f797\n Boot ID: 6cae8cc9-70fd-486a-9495-a1a7da130c42\n Kernel Version: 4.15.0-115-generic\n OS Image: Ubuntu Groovy Gorilla (development branch)\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.4.0\n Kubelet Version: v1.19.0\n Kube-Proxy Version: v1.19.0\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-f9fd979d6-77lvd 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 42h\n kube-system coredns-f9fd979d6-nbdk6 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 42h\n kube-system etcd-kali-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 42h\n kube-system kindnet-pmkbq 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 42h\n kube-system kube-apiserver-kali-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 42h\n kube-system kube-controller-manager-kali-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 42h\n kube-system kube-proxy-z8fp7 0 (0%) 0 (0%) 0 (0%) 0 (0%) 42h\n kube-system kube-scheduler-kali-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 42h\n local-path-storage local-path-provisioner-78776bfc44-pcrjw 0 (0%) 0 (0%) 0 (0%) 0 (0%) 42h\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n"
Sep 15 11:37:22.939: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config describe namespace kubectl-3150'
Sep 15 11:37:23.079: INFO: stderr: ""
Sep 15 11:37:23.079: INFO: stdout: "Name: kubectl-3150\nLabels: e2e-framework=kubectl\n e2e-run=a90abe08-8b9b-4d48-a9dd-629358f843a9\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 15 11:37:23.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3150" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":303,"completed":206,"skipped":3639,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 15 11:37:23.087: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0666 on node default medium
Sep 15 11:37:23.192: INFO: Waiting up to 5m0s for pod "pod-29e13c71-e92b-4223-8579-876ca18e362b" in namespace "emptydir-4727" to be "Succeeded or Failed"
Sep 15 11:37:23.195: INFO: Pod "pod-29e13c71-e92b-4223-8579-876ca18e362b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.156041ms
Sep 15 11:37:25.199: INFO: Pod "pod-29e13c71-e92b-4223-8579-876ca18e362b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006728556s
Sep 15 11:37:27.216: INFO: Pod "pod-29e13c71-e92b-4223-8579-876ca18e362b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023463767s
STEP: Saw pod success
Sep 15 11:37:27.216: INFO: Pod "pod-29e13c71-e92b-4223-8579-876ca18e362b" satisfied condition "Succeeded or Failed"
Sep 15 11:37:27.218: INFO: Trying to get logs from node kali-worker2 pod pod-29e13c71-e92b-4223-8579-876ca18e362b container test-container:
STEP: delete the pod
Sep 15 11:37:27.268: INFO: Waiting for pod pod-29e13c71-e92b-4223-8579-876ca18e362b to disappear
Sep 15 11:37:27.285: INFO: Pod pod-29e13c71-e92b-4223-8579-876ca18e362b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 15 11:37:27.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4727" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":207,"skipped":3653,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 15 11:37:27.292: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Sep 15 11:37:28.332: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Sep 15 11:37:30.343: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735766648, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735766648, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735766648, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735766648, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep 15 11:37:32.354: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735766648, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735766648, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735766648, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735766648, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Sep 15 11:37:35.379: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate configmap [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Registering the mutating configmap webhook via the AdmissionRegistration API
STEP: create a configmap that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 15 11:37:35.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6858" for this suite.
STEP: Destroying namespace "webhook-6858-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:8.291 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should mutate configmap [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":303,"completed":208,"skipped":3662,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 15 11:37:35.584: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating projection with configMap that has name projected-configmap-test-upd-76e7e6ca-e026-41ad-8fe5-9a078e09940e
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-76e7e6ca-e026-41ad-8fe5-9a078e09940e
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 15 11:37:41.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9745" for this suite.
• [SLOW TEST:6.242 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
updates should be reflected in volume [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":209,"skipped":3682,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 15 11:37:41.827: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in container's command Sep 15 11:37:41.925: INFO: Waiting up to 5m0s for pod "var-expansion-dda879fd-47ed-4547-819d-0a567f18ddd8" in namespace "var-expansion-6650" to be "Succeeded or Failed" Sep 15 11:37:41.933: INFO: Pod "var-expansion-dda879fd-47ed-4547-819d-0a567f18ddd8": Phase="Pending", Reason="", readiness=false. Elapsed: 7.469593ms Sep 15 11:37:44.061: INFO: Pod "var-expansion-dda879fd-47ed-4547-819d-0a567f18ddd8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.13540516s Sep 15 11:37:46.065: INFO: Pod "var-expansion-dda879fd-47ed-4547-819d-0a567f18ddd8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.139678892s STEP: Saw pod success Sep 15 11:37:46.065: INFO: Pod "var-expansion-dda879fd-47ed-4547-819d-0a567f18ddd8" satisfied condition "Succeeded or Failed" Sep 15 11:37:46.068: INFO: Trying to get logs from node kali-worker2 pod var-expansion-dda879fd-47ed-4547-819d-0a567f18ddd8 container dapi-container: STEP: delete the pod Sep 15 11:37:46.156: INFO: Waiting for pod var-expansion-dda879fd-47ed-4547-819d-0a567f18ddd8 to disappear Sep 15 11:37:46.159: INFO: Pod var-expansion-dda879fd-47ed-4547-819d-0a567f18ddd8 no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:37:46.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6650" for this suite. 
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":303,"completed":210,"skipped":3703,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:37:46.167: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-d0175466-740b-4c0b-bc4c-d4038940e4ee STEP: Creating a pod to test consume configMaps Sep 15 11:37:46.291: INFO: Waiting up to 5m0s for pod "pod-configmaps-6269e137-262c-4bfd-8d6b-6c64376f4c2e" in namespace "configmap-7340" to be "Succeeded or Failed" Sep 15 11:37:46.327: INFO: Pod "pod-configmaps-6269e137-262c-4bfd-8d6b-6c64376f4c2e": Phase="Pending", Reason="", readiness=false. Elapsed: 36.028596ms Sep 15 11:37:48.332: INFO: Pod "pod-configmaps-6269e137-262c-4bfd-8d6b-6c64376f4c2e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041243787s Sep 15 11:37:50.379: INFO: Pod "pod-configmaps-6269e137-262c-4bfd-8d6b-6c64376f4c2e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.088043596s Sep 15 11:37:52.384: INFO: Pod "pod-configmaps-6269e137-262c-4bfd-8d6b-6c64376f4c2e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.093696362s STEP: Saw pod success Sep 15 11:37:52.385: INFO: Pod "pod-configmaps-6269e137-262c-4bfd-8d6b-6c64376f4c2e" satisfied condition "Succeeded or Failed" Sep 15 11:37:52.387: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-6269e137-262c-4bfd-8d6b-6c64376f4c2e container configmap-volume-test: STEP: delete the pod Sep 15 11:37:52.420: INFO: Waiting for pod pod-configmaps-6269e137-262c-4bfd-8d6b-6c64376f4c2e to disappear Sep 15 11:37:52.426: INFO: Pod pod-configmaps-6269e137-262c-4bfd-8d6b-6c64376f4c2e no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:37:52.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7340" for this suite. 
• [SLOW TEST:6.303 seconds] [sig-storage] ConfigMap /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":303,"completed":211,"skipped":3721,"failed":0} SSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:37:52.470: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting the auto-created API token STEP: reading a file in the container Sep 15 11:37:57.161: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9599 pod-service-account-3d410e93-7394-4a6a-8045-a32926129d71 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Sep 15 11:37:57.379: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9599 
pod-service-account-3d410e93-7394-4a6a-8045-a32926129d71 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Sep 15 11:37:57.615: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9599 pod-service-account-3d410e93-7394-4a6a-8045-a32926129d71 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:37:57.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-9599" for this suite. • [SLOW TEST:5.392 seconds] [sig-auth] ServiceAccounts /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":303,"completed":212,"skipped":3727,"failed":0} [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:37:57.862: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should add annotations for pods in rc [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating Agnhost RC Sep 15 11:37:57.912: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1640' Sep 15 11:37:58.195: INFO: stderr: "" Sep 15 11:37:58.195: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Sep 15 11:37:59.200: INFO: Selector matched 1 pods for map[app:agnhost] Sep 15 11:37:59.200: INFO: Found 0 / 1 Sep 15 11:38:00.215: INFO: Selector matched 1 pods for map[app:agnhost] Sep 15 11:38:00.215: INFO: Found 0 / 1 Sep 15 11:38:01.200: INFO: Selector matched 1 pods for map[app:agnhost] Sep 15 11:38:01.200: INFO: Found 1 / 1 Sep 15 11:38:01.200: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Sep 15 11:38:01.203: INFO: Selector matched 1 pods for map[app:agnhost] Sep 15 11:38:01.203: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Sep 15 11:38:01.203: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config patch pod agnhost-primary-n4vm7 --namespace=kubectl-1640 -p {"metadata":{"annotations":{"x":"y"}}}' Sep 15 11:38:01.312: INFO: stderr: "" Sep 15 11:38:01.312: INFO: stdout: "pod/agnhost-primary-n4vm7 patched\n" STEP: checking annotations Sep 15 11:38:01.360: INFO: Selector matched 1 pods for map[app:agnhost] Sep 15 11:38:01.360: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
[AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:38:01.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1640" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":303,"completed":213,"skipped":3727,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:38:01.370: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should create and stop a working application [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating all guestbook components Sep 15 11:38:01.414: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-replica labels: app: agnhost role: replica tier: backend spec: ports: - port: 6379 selector: app: agnhost role: replica tier: backend Sep 15 11:38:01.414: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config create -f - 
--namespace=kubectl-6587' Sep 15 11:38:01.793: INFO: stderr: "" Sep 15 11:38:01.793: INFO: stdout: "service/agnhost-replica created\n" Sep 15 11:38:01.793: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-primary labels: app: agnhost role: primary tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: primary tier: backend Sep 15 11:38:01.793: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6587' Sep 15 11:38:02.108: INFO: stderr: "" Sep 15 11:38:02.108: INFO: stdout: "service/agnhost-primary created\n" Sep 15 11:38:02.108: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. # type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Sep 15 11:38:02.108: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6587' Sep 15 11:38:02.439: INFO: stderr: "" Sep 15 11:38:02.440: INFO: stdout: "service/frontend created\n" Sep 15 11:38:02.440: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: k8s.gcr.io/e2e-test-images/agnhost:2.20 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 Sep 15 11:38:02.440: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6587' Sep 15 11:38:02.756: INFO: stderr: "" Sep 15 11:38:02.756: INFO: stdout: "deployment.apps/frontend created\n" Sep 15 11:38:02.756: INFO: apiVersion: apps/v1 
kind: Deployment metadata: name: agnhost-primary spec: replicas: 1 selector: matchLabels: app: agnhost role: primary tier: backend template: metadata: labels: app: agnhost role: primary tier: backend spec: containers: - name: primary image: k8s.gcr.io/e2e-test-images/agnhost:2.20 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Sep 15 11:38:02.756: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6587' Sep 15 11:38:03.145: INFO: stderr: "" Sep 15 11:38:03.145: INFO: stdout: "deployment.apps/agnhost-primary created\n" Sep 15 11:38:03.146: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-replica spec: replicas: 2 selector: matchLabels: app: agnhost role: replica tier: backend template: metadata: labels: app: agnhost role: replica tier: backend spec: containers: - name: replica image: k8s.gcr.io/e2e-test-images/agnhost:2.20 args: [ "guestbook", "--replicaof", "agnhost-primary", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Sep 15 11:38:03.146: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6587' Sep 15 11:38:03.498: INFO: stderr: "" Sep 15 11:38:03.499: INFO: stdout: "deployment.apps/agnhost-replica created\n" STEP: validating guestbook app Sep 15 11:38:03.499: INFO: Waiting for all frontend pods to be Running. Sep 15 11:38:13.551: INFO: Waiting for frontend to serve content. Sep 15 11:38:13.561: INFO: Trying to add a new entry to the guestbook. Sep 15 11:38:13.572: INFO: Verifying that added entry can be retrieved. 
STEP: using delete to clean up resources Sep 15 11:38:13.579: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6587' Sep 15 11:38:13.738: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Sep 15 11:38:13.738: INFO: stdout: "service \"agnhost-replica\" force deleted\n" STEP: using delete to clean up resources Sep 15 11:38:13.739: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6587' Sep 15 11:38:13.884: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Sep 15 11:38:13.884: INFO: stdout: "service \"agnhost-primary\" force deleted\n" STEP: using delete to clean up resources Sep 15 11:38:13.884: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6587' Sep 15 11:38:14.004: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Sep 15 11:38:14.004: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Sep 15 11:38:14.004: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6587' Sep 15 11:38:14.106: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Sep 15 11:38:14.106: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Sep 15 11:38:14.106: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6587' Sep 15 11:38:14.551: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Sep 15 11:38:14.551: INFO: stdout: "deployment.apps \"agnhost-primary\" force deleted\n" STEP: using delete to clean up resources Sep 15 11:38:14.551: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6587' Sep 15 11:38:14.767: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Sep 15 11:38:14.767: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:38:14.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6587" for this suite. 
• [SLOW TEST:13.549 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:351 should create and stop a working application [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":303,"completed":214,"skipped":3742,"failed":0} SSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:38:14.919: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && 
echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9630.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9630.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Sep 15 11:38:23.900: INFO: DNS probes using dns-9630/dns-test-50485c4a-6953-4d01-8db2-c3ca59c5bfb9 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:38:23.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9630" for this suite. 
• [SLOW TEST:9.071 seconds] [sig-network] DNS /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":303,"completed":215,"skipped":3746,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:38:23.991: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-map-772f849d-53f5-4763-8c64-a7644f5a972e STEP: Creating a pod to test consume secrets Sep 15 11:38:24.426: INFO: Waiting up to 5m0s for pod "pod-secrets-ab63c155-0f13-4ee1-b460-c4c657e23386" in namespace "secrets-4951" to be "Succeeded or Failed" Sep 15 11:38:24.452: INFO: Pod "pod-secrets-ab63c155-0f13-4ee1-b460-c4c657e23386": Phase="Pending", Reason="", readiness=false. 
Elapsed: 26.877802ms Sep 15 11:38:26.534: INFO: Pod "pod-secrets-ab63c155-0f13-4ee1-b460-c4c657e23386": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108693609s Sep 15 11:38:28.539: INFO: Pod "pod-secrets-ab63c155-0f13-4ee1-b460-c4c657e23386": Phase="Running", Reason="", readiness=true. Elapsed: 4.113209215s Sep 15 11:38:30.543: INFO: Pod "pod-secrets-ab63c155-0f13-4ee1-b460-c4c657e23386": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.117234488s STEP: Saw pod success Sep 15 11:38:30.543: INFO: Pod "pod-secrets-ab63c155-0f13-4ee1-b460-c4c657e23386" satisfied condition "Succeeded or Failed" Sep 15 11:38:30.546: INFO: Trying to get logs from node kali-worker pod pod-secrets-ab63c155-0f13-4ee1-b460-c4c657e23386 container secret-volume-test: STEP: delete the pod Sep 15 11:38:30.571: INFO: Waiting for pod pod-secrets-ab63c155-0f13-4ee1-b460-c4c657e23386 to disappear Sep 15 11:38:30.623: INFO: Pod pod-secrets-ab63c155-0f13-4ee1-b460-c4c657e23386 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:38:30.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4951" for this suite. 
• [SLOW TEST:6.646 seconds] [sig-storage] Secrets /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":216,"skipped":3759,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:38:30.638: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-ebf2eb2b-51fe-4cab-b898-32c62333f38d STEP: Creating a pod to test consume secrets Sep 15 11:38:30.697: INFO: Waiting up to 5m0s for pod "pod-secrets-7399c6a9-34c2-4ba6-9963-75aa93365801" in 
namespace "secrets-3833" to be "Succeeded or Failed" Sep 15 11:38:30.714: INFO: Pod "pod-secrets-7399c6a9-34c2-4ba6-9963-75aa93365801": Phase="Pending", Reason="", readiness=false. Elapsed: 16.22682ms Sep 15 11:38:32.878: INFO: Pod "pod-secrets-7399c6a9-34c2-4ba6-9963-75aa93365801": Phase="Pending", Reason="", readiness=false. Elapsed: 2.180424898s Sep 15 11:38:34.883: INFO: Pod "pod-secrets-7399c6a9-34c2-4ba6-9963-75aa93365801": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.185314115s STEP: Saw pod success Sep 15 11:38:34.883: INFO: Pod "pod-secrets-7399c6a9-34c2-4ba6-9963-75aa93365801" satisfied condition "Succeeded or Failed" Sep 15 11:38:34.886: INFO: Trying to get logs from node kali-worker pod pod-secrets-7399c6a9-34c2-4ba6-9963-75aa93365801 container secret-volume-test: STEP: delete the pod Sep 15 11:38:34.934: INFO: Waiting for pod pod-secrets-7399c6a9-34c2-4ba6-9963-75aa93365801 to disappear Sep 15 11:38:34.941: INFO: Pod pod-secrets-7399c6a9-34c2-4ba6-9963-75aa93365801 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:38:34.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3833" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":217,"skipped":3784,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:38:34.949: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on node default medium Sep 15 11:38:35.027: INFO: Waiting up to 5m0s for pod "pod-f515e7b8-4284-4c4a-90c5-efae9b3bc991" in namespace "emptydir-4123" to be "Succeeded or Failed" Sep 15 11:38:35.042: INFO: Pod "pod-f515e7b8-4284-4c4a-90c5-efae9b3bc991": Phase="Pending", Reason="", readiness=false. Elapsed: 15.452858ms Sep 15 11:38:37.189: INFO: Pod "pod-f515e7b8-4284-4c4a-90c5-efae9b3bc991": Phase="Pending", Reason="", readiness=false. Elapsed: 2.162361185s Sep 15 11:38:39.193: INFO: Pod "pod-f515e7b8-4284-4c4a-90c5-efae9b3bc991": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.166184266s STEP: Saw pod success Sep 15 11:38:39.193: INFO: Pod "pod-f515e7b8-4284-4c4a-90c5-efae9b3bc991" satisfied condition "Succeeded or Failed" Sep 15 11:38:39.196: INFO: Trying to get logs from node kali-worker2 pod pod-f515e7b8-4284-4c4a-90c5-efae9b3bc991 container test-container: STEP: delete the pod Sep 15 11:38:39.270: INFO: Waiting for pod pod-f515e7b8-4284-4c4a-90c5-efae9b3bc991 to disappear Sep 15 11:38:39.273: INFO: Pod pod-f515e7b8-4284-4c4a-90c5-efae9b3bc991 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:38:39.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4123" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":218,"skipped":3805,"failed":0} SS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] PreStop /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:38:39.281: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171 [It] should call prestop when killing a pod [Conformance] 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating server pod server in namespace prestop-2121
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-2121
STEP: Deleting pre-stop pod
Sep 15 11:38:52.850: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 15 11:38:52.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-2121" for this suite.
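The PreStop test above exercises a pod-level lifecycle hook: the kubelet runs the hook command before sending SIGTERM on pod deletion, which is why the server pod records `"prestop": 1` after the tester pod is deleted. A minimal sketch of the kind of manifest involved, expressed as a Python dict — the pod name, image, and hook URL are illustrative, not taken from the test (the actual e2e test builds its pods programmatically in Go):

```python
# Sketch of a pod with a preStop exec lifecycle hook. All names
# (pod, image, URL) are illustrative assumptions.
pre_stop_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "prestop-demo"},
    "spec": {
        "containers": [
            {
                "name": "main",
                "image": "busybox",
                "command": ["sleep", "3600"],
                "lifecycle": {
                    # Run by the kubelet inside the container before
                    # SIGTERM is sent when the pod is deleted.
                    "preStop": {
                        "exec": {
                            "command": [
                                "wget", "-qO-",
                                "http://server.prestop-demo.svc:8080/prestop",
                            ]
                        }
                    }
                },
            }
        ]
    },
}

hook = pre_stop_pod["spec"]["containers"][0]["lifecycle"]["preStop"]
assert "exec" in hook  # exec-style hook, as in the test above
```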
• [SLOW TEST:13.593 seconds] [k8s.io] [sig-node] PreStop /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should call prestop when killing a pod [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":303,"completed":219,"skipped":3807,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:38:52.875: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 15 11:38:53.693: INFO: Waiting up to 5m0s for pod 
"downwardapi-volume-c83c1d40-a76a-436a-971c-e1d64f63e756" in namespace "downward-api-2011" to be "Succeeded or Failed" Sep 15 11:38:53.739: INFO: Pod "downwardapi-volume-c83c1d40-a76a-436a-971c-e1d64f63e756": Phase="Pending", Reason="", readiness=false. Elapsed: 45.960074ms Sep 15 11:38:55.743: INFO: Pod "downwardapi-volume-c83c1d40-a76a-436a-971c-e1d64f63e756": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050694066s Sep 15 11:38:57.749: INFO: Pod "downwardapi-volume-c83c1d40-a76a-436a-971c-e1d64f63e756": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.056179408s STEP: Saw pod success Sep 15 11:38:57.749: INFO: Pod "downwardapi-volume-c83c1d40-a76a-436a-971c-e1d64f63e756" satisfied condition "Succeeded or Failed" Sep 15 11:38:57.751: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-c83c1d40-a76a-436a-971c-e1d64f63e756 container client-container: STEP: delete the pod Sep 15 11:38:57.815: INFO: Waiting for pod downwardapi-volume-c83c1d40-a76a-436a-971c-e1d64f63e756 to disappear Sep 15 11:38:57.824: INFO: Pod downwardapi-volume-c83c1d40-a76a-436a-971c-e1d64f63e756 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:38:57.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2011" for this suite. 
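The Downward API test above checks that when a container sets no cpu limit, the value exposed through a downward API volume defaults to the node's allocatable cpu. A sketch of the volume shape involved, assuming illustrative names:

```python
# Sketch of a downward API volume that exposes the container's
# effective cpu limit as a file. With no cpu limit set on the
# container, the kubelet reports node allocatable cpu instead
# (the behavior the test above verifies). Names are illustrative.
downward_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "downwardapi-demo"},
    "spec": {
        "containers": [{
            "name": "client-container",
            "image": "busybox",
            "command": ["sh", "-c", "cat /etc/podinfo/cpu_limit"],
            "volumeMounts": [{"name": "podinfo", "mountPath": "/etc/podinfo"}],
        }],
        "volumes": [{
            "name": "podinfo",
            "downwardAPI": {
                "items": [{
                    "path": "cpu_limit",
                    "resourceFieldRef": {
                        "containerName": "client-container",
                        "resource": "limits.cpu",
                    },
                }]
            },
        }],
    },
}

item = downward_pod["spec"]["volumes"][0]["downwardAPI"]["items"][0]
assert item["resourceFieldRef"]["resource"] == "limits.cpu"
```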
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":303,"completed":220,"skipped":3834,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:38:57.835: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod Sep 15 11:39:02.422: INFO: Successfully updated pod "annotationupdate631cb567-53a4-4330-a66a-01093dcaf6c4" [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:39:04.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3321" for this suite. 
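The projected downwardAPI test above updates a pod's annotations and waits for the mounted file to reflect the change: annotations are mutable pod metadata, so the kubelet periodically refreshes the projected file. A sketch of the volume involved, with illustrative names:

```python
# Sketch of a projected downward API volume exposing pod annotations.
# Because annotations can change after creation, the kubelet refreshes
# the mounted file — the propagation the test above waits for.
# All names and values are illustrative assumptions.
projected_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {
        "name": "annotationupdate-demo",
        "annotations": {"build": "one"},
    },
    "spec": {
        "containers": [{
            "name": "client-container",
            "image": "busybox",
            "command": ["sh", "-c",
                        "while true; do cat /etc/podinfo/annotations; sleep 5; done"],
            "volumeMounts": [{"name": "podinfo", "mountPath": "/etc/podinfo"}],
        }],
        "volumes": [{
            "name": "podinfo",
            "projected": {
                "sources": [{
                    "downwardAPI": {
                        "items": [{
                            "path": "annotations",
                            "fieldRef": {"fieldPath": "metadata.annotations"},
                        }]
                    }
                }]
            },
        }],
    },
}

src = projected_pod["spec"]["volumes"][0]["projected"]["sources"][0]
assert src["downwardAPI"]["items"][0]["fieldRef"]["fieldPath"] == "metadata.annotations"
```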
• [SLOW TEST:6.629 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":303,"completed":221,"skipped":3847,"failed":0} S ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:39:04.464: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:39:10.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-4271" for this suite. STEP: Destroying namespace "nsdeletetest-5300" for this suite. Sep 15 11:39:10.917: INFO: Namespace nsdeletetest-5300 was already deleted STEP: Destroying namespace "nsdeletetest-4299" for this suite. • [SLOW TEST:6.494 seconds] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":303,"completed":222,"skipped":3848,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:39:10.958: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: 
Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:39:15.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1094" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":303,"completed":223,"skipped":3885,"failed":0} SSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:39:15.510: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:39:22.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4452" for this suite. • [SLOW TEST:7.128 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":303,"completed":224,"skipped":3888,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:39:22.638: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 15 11:39:23.624: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 15 11:39:25.633: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735766763, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735766763, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735766763, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735766763, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 15 11:39:28.667: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:39:40.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3899" for this suite. STEP: Destroying namespace "webhook-3899-markers" for this suite. 
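The timeout test above registers a deliberately slow webhook (about 5s of latency) with a 1s timeout and checks both failure policies: with `Fail` the request is rejected when the timeout expires, with `Ignore` it proceeds; an empty timeout defaults to 10s in `admissionregistration.k8s.io/v1`. A sketch of the kind of configuration registered, with illustrative names and a placeholder CA bundle:

```python
# Sketch of a ValidatingWebhookConfiguration like the "slow webhook"
# above: a 1s timeout against a ~5s webhook, with failurePolicy
# controlling behavior on timeout. Names, path, and caBundle are
# illustrative assumptions, not values from the test.
slow_webhook = {
    "apiVersion": "admissionregistration.k8s.io/v1",
    "kind": "ValidatingWebhookConfiguration",
    "metadata": {"name": "slow-webhook-demo"},
    "webhooks": [{
        "name": "slow.example.com",
        "admissionReviewVersions": ["v1"],
        "sideEffects": "None",
        "timeoutSeconds": 1,        # shorter than the webhook's ~5s latency
        "failurePolicy": "Ignore",  # "Fail" would reject the request on timeout
        "clientConfig": {
            "service": {
                "namespace": "webhook-demo",
                "name": "e2e-test-webhook",
                "path": "/always-allow-delay-5s",
            },
            "caBundle": "PLACEHOLDER",
        },
        "rules": [{
            "apiGroups": [""],
            "apiVersions": ["v1"],
            "operations": ["CREATE"],
            "resources": ["configmaps"],
        }],
    }],
}

wh = slow_webhook["webhooks"][0]
assert wh["timeoutSeconds"] < 5 and wh["failurePolicy"] in ("Ignore", "Fail")
```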
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:18.306 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":303,"completed":225,"skipped":3901,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:39:40.945: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name cm-test-opt-del-76599451-caf0-4bf2-aab9-e1f3847a7a20 STEP: Creating configMap with name cm-test-opt-upd-ae017285-d29b-4263-93fb-ed4de7ff62c1 STEP: Creating 
the pod STEP: Deleting configmap cm-test-opt-del-76599451-caf0-4bf2-aab9-e1f3847a7a20 STEP: Updating configmap cm-test-opt-upd-ae017285-d29b-4263-93fb-ed4de7ff62c1 STEP: Creating configMap with name cm-test-opt-create-85d1f5f3-69e1-4c5a-bd9a-d8a1896a2bac STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:39:49.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9126" for this suite. • [SLOW TEST:8.208 seconds] [sig-storage] ConfigMap /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":226,"skipped":3916,"failed":0} SSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:39:49.153: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default 
service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Sep 15 11:39:59.293: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Sep 15 11:39:59.315: INFO: Pod pod-with-poststart-exec-hook still exists Sep 15 11:40:01.315: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Sep 15 11:40:01.319: INFO: Pod pod-with-poststart-exec-hook still exists Sep 15 11:40:03.315: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Sep 15 11:40:03.320: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:40:03.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-1182" for this suite. 
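The lifecycle-hook test above uses the postStart counterpart of preStop: the kubelet runs the hook command immediately after the container starts. A sketch of the pod shape, with illustrative command and image:

```python
# Sketch of a pod with a postStart exec hook, as exercised above.
# The kubelet runs the hook in the container right after it starts.
# Image and hook command are illustrative assumptions.
post_start_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-with-poststart-exec-hook"},
    "spec": {
        "containers": [{
            "name": "main",
            "image": "busybox",
            "command": ["sleep", "3600"],
            "lifecycle": {
                "postStart": {
                    "exec": {
                        "command": ["sh", "-c", "echo started > /tmp/poststart"]
                    }
                }
            },
        }]
    },
}

assert "postStart" in post_start_pod["spec"]["containers"][0]["lifecycle"]
```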
• [SLOW TEST:14.176 seconds] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":303,"completed":227,"skipped":3923,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:40:03.329: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should provide secure master service [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-network] Services 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:40:03.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5127" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":303,"completed":228,"skipped":3945,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:40:03.494: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 15 11:40:03.565: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 
Sep 15 11:40:10.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-8233" for this suite. • [SLOW TEST:6.547 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":303,"completed":229,"skipped":3957,"failed":0} SSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:40:10.042: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless 
service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4003.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4003.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4003.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4003.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4003.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-4003.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4003.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-4003.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4003.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-4003.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4003.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-4003.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4003.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 114.97.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.97.114_udp@PTR;check="$$(dig +tcp +noall +answer +search 114.97.106.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.106.97.114_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4003.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4003.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4003.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4003.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4003.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-4003.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4003.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-4003.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4003.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-4003.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4003.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-4003.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4003.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 114.97.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.97.114_udp@PTR;check="$$(dig +tcp +noall +answer +search 114.97.106.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.106.97.114_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Sep 15 11:40:18.362: INFO: Unable to read wheezy_udp@dns-test-service.dns-4003.svc.cluster.local from pod dns-4003/dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd: the server could not find the requested resource (get pods dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd) Sep 15 11:40:18.365: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4003.svc.cluster.local from pod dns-4003/dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd: the server could not find the requested resource (get pods dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd) Sep 15 11:40:18.368: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4003.svc.cluster.local from pod dns-4003/dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd: the server could not find the requested resource (get pods dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd) Sep 15 11:40:18.371: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4003.svc.cluster.local from pod dns-4003/dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd: the server could not find the requested resource (get pods dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd) Sep 15 11:40:18.392: INFO: Unable to read jessie_udp@dns-test-service.dns-4003.svc.cluster.local from pod dns-4003/dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd: the server could not find the requested resource (get pods dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd) Sep 15 11:40:18.396: INFO: Unable to read jessie_tcp@dns-test-service.dns-4003.svc.cluster.local from pod dns-4003/dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd: the server could not find the requested resource (get pods dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd) Sep 15 11:40:18.399: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4003.svc.cluster.local from pod 
dns-4003/dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd: the server could not find the requested resource (get pods dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd) Sep 15 11:40:18.402: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4003.svc.cluster.local from pod dns-4003/dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd: the server could not find the requested resource (get pods dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd) Sep 15 11:40:18.421: INFO: Lookups using dns-4003/dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd failed for: [wheezy_udp@dns-test-service.dns-4003.svc.cluster.local wheezy_tcp@dns-test-service.dns-4003.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4003.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4003.svc.cluster.local jessie_udp@dns-test-service.dns-4003.svc.cluster.local jessie_tcp@dns-test-service.dns-4003.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4003.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4003.svc.cluster.local] Sep 15 11:40:23.427: INFO: Unable to read wheezy_udp@dns-test-service.dns-4003.svc.cluster.local from pod dns-4003/dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd: the server could not find the requested resource (get pods dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd) Sep 15 11:40:23.430: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4003.svc.cluster.local from pod dns-4003/dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd: the server could not find the requested resource (get pods dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd) Sep 15 11:40:23.433: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4003.svc.cluster.local from pod dns-4003/dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd: the server could not find the requested resource (get pods dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd) Sep 15 11:40:23.435: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4003.svc.cluster.local from pod 
dns-4003/dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd: the server could not find the requested resource (get pods dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd) Sep 15 11:40:23.456: INFO: Unable to read jessie_udp@dns-test-service.dns-4003.svc.cluster.local from pod dns-4003/dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd: the server could not find the requested resource (get pods dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd) Sep 15 11:40:23.458: INFO: Unable to read jessie_tcp@dns-test-service.dns-4003.svc.cluster.local from pod dns-4003/dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd: the server could not find the requested resource (get pods dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd) Sep 15 11:40:23.460: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4003.svc.cluster.local from pod dns-4003/dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd: the server could not find the requested resource (get pods dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd) Sep 15 11:40:23.462: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4003.svc.cluster.local from pod dns-4003/dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd: the server could not find the requested resource (get pods dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd) Sep 15 11:40:23.476: INFO: Lookups using dns-4003/dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd failed for: [wheezy_udp@dns-test-service.dns-4003.svc.cluster.local wheezy_tcp@dns-test-service.dns-4003.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4003.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4003.svc.cluster.local jessie_udp@dns-test-service.dns-4003.svc.cluster.local jessie_tcp@dns-test-service.dns-4003.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4003.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4003.svc.cluster.local] Sep 15 11:40:28.427: INFO: Unable to read wheezy_udp@dns-test-service.dns-4003.svc.cluster.local from pod 
dns-4003/dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd: the server could not find the requested resource (get pods dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd) Sep 15 11:40:28.431: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4003.svc.cluster.local from pod dns-4003/dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd: the server could not find the requested resource (get pods dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd) Sep 15 11:40:28.435: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4003.svc.cluster.local from pod dns-4003/dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd: the server could not find the requested resource (get pods dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd) Sep 15 11:40:28.438: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4003.svc.cluster.local from pod dns-4003/dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd: the server could not find the requested resource (get pods dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd) Sep 15 11:40:28.461: INFO: Unable to read jessie_udp@dns-test-service.dns-4003.svc.cluster.local from pod dns-4003/dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd: the server could not find the requested resource (get pods dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd) Sep 15 11:40:28.464: INFO: Unable to read jessie_tcp@dns-test-service.dns-4003.svc.cluster.local from pod dns-4003/dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd: the server could not find the requested resource (get pods dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd) Sep 15 11:40:28.468: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4003.svc.cluster.local from pod dns-4003/dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd: the server could not find the requested resource (get pods dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd) Sep 15 11:40:28.471: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4003.svc.cluster.local from pod dns-4003/dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd: the server could not 
find the requested resource (get pods dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd) Sep 15 11:40:28.489: INFO: Lookups using dns-4003/dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd failed for: [wheezy_udp@dns-test-service.dns-4003.svc.cluster.local wheezy_tcp@dns-test-service.dns-4003.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4003.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4003.svc.cluster.local jessie_udp@dns-test-service.dns-4003.svc.cluster.local jessie_tcp@dns-test-service.dns-4003.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4003.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4003.svc.cluster.local] Sep 15 11:40:33.426: INFO: Unable to read wheezy_udp@dns-test-service.dns-4003.svc.cluster.local from pod dns-4003/dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd: the server could not find the requested resource (get pods dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd) Sep 15 11:40:33.430: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4003.svc.cluster.local from pod dns-4003/dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd: the server could not find the requested resource (get pods dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd) Sep 15 11:40:33.433: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4003.svc.cluster.local from pod dns-4003/dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd: the server could not find the requested resource (get pods dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd) Sep 15 11:40:33.436: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4003.svc.cluster.local from pod dns-4003/dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd: the server could not find the requested resource (get pods dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd) Sep 15 11:40:33.457: INFO: Unable to read jessie_udp@dns-test-service.dns-4003.svc.cluster.local from pod dns-4003/dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd: the server could not find the requested resource (get pods 
dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd) Sep 15 11:40:33.460: INFO: Unable to read jessie_tcp@dns-test-service.dns-4003.svc.cluster.local from pod dns-4003/dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd: the server could not find the requested resource (get pods dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd) Sep 15 11:40:33.463: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4003.svc.cluster.local from pod dns-4003/dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd: the server could not find the requested resource (get pods dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd) Sep 15 11:40:33.466: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4003.svc.cluster.local from pod dns-4003/dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd: the server could not find the requested resource (get pods dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd) Sep 15 11:40:33.486: INFO: Lookups using dns-4003/dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd failed for: [wheezy_udp@dns-test-service.dns-4003.svc.cluster.local wheezy_tcp@dns-test-service.dns-4003.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4003.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4003.svc.cluster.local jessie_udp@dns-test-service.dns-4003.svc.cluster.local jessie_tcp@dns-test-service.dns-4003.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4003.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4003.svc.cluster.local] Sep 15 11:40:38.426: INFO: Unable to read wheezy_udp@dns-test-service.dns-4003.svc.cluster.local from pod dns-4003/dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd: the server could not find the requested resource (get pods dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd) Sep 15 11:40:38.429: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4003.svc.cluster.local from pod dns-4003/dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd: the server could not find the requested resource (get pods 
dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd) Sep 15 11:40:38.432: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4003.svc.cluster.local from pod dns-4003/dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd: the server could not find the requested resource (get pods dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd) Sep 15 11:40:38.435: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4003.svc.cluster.local from pod dns-4003/dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd: the server could not find the requested resource (get pods dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd) Sep 15 11:40:38.458: INFO: Unable to read jessie_udp@dns-test-service.dns-4003.svc.cluster.local from pod dns-4003/dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd: the server could not find the requested resource (get pods dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd) Sep 15 11:40:38.461: INFO: Unable to read jessie_tcp@dns-test-service.dns-4003.svc.cluster.local from pod dns-4003/dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd: the server could not find the requested resource (get pods dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd) Sep 15 11:40:38.463: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4003.svc.cluster.local from pod dns-4003/dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd: the server could not find the requested resource (get pods dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd) Sep 15 11:40:38.466: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4003.svc.cluster.local from pod dns-4003/dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd: the server could not find the requested resource (get pods dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd) Sep 15 11:40:38.484: INFO: Lookups using dns-4003/dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd failed for: [wheezy_udp@dns-test-service.dns-4003.svc.cluster.local wheezy_tcp@dns-test-service.dns-4003.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4003.svc.cluster.local 
wheezy_tcp@_http._tcp.dns-test-service.dns-4003.svc.cluster.local jessie_udp@dns-test-service.dns-4003.svc.cluster.local jessie_tcp@dns-test-service.dns-4003.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4003.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4003.svc.cluster.local] Sep 15 11:40:43.427: INFO: Unable to read wheezy_udp@dns-test-service.dns-4003.svc.cluster.local from pod dns-4003/dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd: the server could not find the requested resource (get pods dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd) Sep 15 11:40:43.430: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4003.svc.cluster.local from pod dns-4003/dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd: the server could not find the requested resource (get pods dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd) Sep 15 11:40:43.434: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4003.svc.cluster.local from pod dns-4003/dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd: the server could not find the requested resource (get pods dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd) Sep 15 11:40:43.438: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4003.svc.cluster.local from pod dns-4003/dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd: the server could not find the requested resource (get pods dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd) Sep 15 11:40:43.461: INFO: Unable to read jessie_udp@dns-test-service.dns-4003.svc.cluster.local from pod dns-4003/dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd: the server could not find the requested resource (get pods dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd) Sep 15 11:40:43.464: INFO: Unable to read jessie_tcp@dns-test-service.dns-4003.svc.cluster.local from pod dns-4003/dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd: the server could not find the requested resource (get pods dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd) Sep 15 11:40:43.467: INFO: Unable to read 
jessie_udp@_http._tcp.dns-test-service.dns-4003.svc.cluster.local from pod dns-4003/dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd: the server could not find the requested resource (get pods dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd) Sep 15 11:40:43.469: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4003.svc.cluster.local from pod dns-4003/dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd: the server could not find the requested resource (get pods dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd) Sep 15 11:40:43.488: INFO: Lookups using dns-4003/dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd failed for: [wheezy_udp@dns-test-service.dns-4003.svc.cluster.local wheezy_tcp@dns-test-service.dns-4003.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4003.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4003.svc.cluster.local jessie_udp@dns-test-service.dns-4003.svc.cluster.local jessie_tcp@dns-test-service.dns-4003.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4003.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4003.svc.cluster.local] Sep 15 11:40:48.487: INFO: DNS probes using dns-4003/dns-test-27b8db89-d158-479c-a8b0-9bb7154f72fd succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:40:49.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4003" for this suite. 
• [SLOW TEST:39.277 seconds] [sig-network] DNS /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":303,"completed":230,"skipped":3963,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:40:49.319: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 15 11:40:50.371: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created Sep 15 11:40:52.381: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, 
AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735766850, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735766850, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735766850, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735766850, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 15 11:40:54.389: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735766850, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735766850, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735766850, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735766850, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 15 11:40:57.414: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] 
should mutate pod and apply defaults after mutation [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:40:57.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4810" for this suite. STEP: Destroying namespace "webhook-4810-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.426 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":303,"completed":231,"skipped":3977,"failed":0} SSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:40:57.745: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:89 Sep 15 11:40:57.889: INFO: Waiting up to 1m0s for all nodes to be ready Sep 15 11:41:57.908: INFO: Waiting for terminating namespaces to be deleted... [It] validates lower priority pod preemption by critical pod [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create pods that use 2/3 of node resources. Sep 15 11:41:57.925: INFO: Created pod: pod0-sched-preemption-low-priority Sep 15 11:41:57.965: INFO: Created pod: pod1-sched-preemption-medium-priority STEP: Wait for pods to be scheduled. STEP: Run a critical pod that uses the same resources as a lower priority pod [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:42:18.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-1170" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77 • [SLOW TEST:80.449 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates lower priority pod preemption by critical pod [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","total":303,"completed":232,"skipped":3982,"failed":0} SSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:42:18.193: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod Sep 15 11:42:18.598: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:42:25.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-8914" for this suite. • [SLOW TEST:7.745 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":303,"completed":233,"skipped":3985,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:42:25.939: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename 
projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 15 11:42:26.310: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dd121be3-37e7-4807-b554-014e00722fc6" in namespace "projected-4129" to be "Succeeded or Failed" Sep 15 11:42:26.353: INFO: Pod "downwardapi-volume-dd121be3-37e7-4807-b554-014e00722fc6": Phase="Pending", Reason="", readiness=false. Elapsed: 42.982464ms Sep 15 11:42:28.402: INFO: Pod "downwardapi-volume-dd121be3-37e7-4807-b554-014e00722fc6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.092832083s Sep 15 11:42:30.407: INFO: Pod "downwardapi-volume-dd121be3-37e7-4807-b554-014e00722fc6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.097763759s STEP: Saw pod success Sep 15 11:42:30.407: INFO: Pod "downwardapi-volume-dd121be3-37e7-4807-b554-014e00722fc6" satisfied condition "Succeeded or Failed" Sep 15 11:42:30.411: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-dd121be3-37e7-4807-b554-014e00722fc6 container client-container: STEP: delete the pod Sep 15 11:42:30.456: INFO: Waiting for pod downwardapi-volume-dd121be3-37e7-4807-b554-014e00722fc6 to disappear Sep 15 11:42:30.503: INFO: Pod downwardapi-volume-dd121be3-37e7-4807-b554-014e00722fc6 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:42:30.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4129" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":303,"completed":234,"skipped":4026,"failed":0} SSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:42:30.511: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:78 [It] 
RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 15 11:42:30.559: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Sep 15 11:42:30.565: INFO: Pod name sample-pod: Found 0 pods out of 1 Sep 15 11:42:35.574: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Sep 15 11:42:35.574: INFO: Creating deployment "test-rolling-update-deployment" Sep 15 11:42:35.599: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Sep 15 11:42:35.609: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Sep 15 11:42:37.777: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Sep 15 11:42:37.811: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735766955, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735766955, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735766955, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735766955, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-c4cb8d6d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 15 11:42:39.814: INFO: Ensuring deployment 
"test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 Sep 15 11:42:39.821: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-9333 /apis/apps/v1/namespaces/deployment-9333/deployments/test-rolling-update-deployment cf15ae9c-58c6-4c7c-851a-90d95d1e07ec 454995 1 2020-09-15 11:42:35 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2020-09-15 11:42:35 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-09-15 11:42:38 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003d25fb8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-09-15 11:42:35 +0000
UTC,LastTransitionTime:2020-09-15 11:42:35 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-c4cb8d6d9" has successfully progressed.,LastUpdateTime:2020-09-15 11:42:38 +0000 UTC,LastTransitionTime:2020-09-15 11:42:35 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Sep 15 11:42:39.823: INFO: New ReplicaSet "test-rolling-update-deployment-c4cb8d6d9" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-c4cb8d6d9 deployment-9333 /apis/apps/v1/namespaces/deployment-9333/replicasets/test-rolling-update-deployment-c4cb8d6d9 a62d7d86-c9d8-418f-a18e-b93ff39fd5f8 454984 1 2020-09-15 11:42:35 +0000 UTC map[name:sample-pod pod-template-hash:c4cb8d6d9] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment cf15ae9c-58c6-4c7c-851a-90d95d1e07ec 0xc004240500 0xc004240501}] [] [{kube-controller-manager Update apps/v1 2020-09-15 11:42:38 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cf15ae9c-58c6-4c7c-851a-90d95d1e07ec\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: c4cb8d6d9,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:c4cb8d6d9] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004240578 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Sep 15 11:42:39.823: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Sep 15 11:42:39.824: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-9333 /apis/apps/v1/namespaces/deployment-9333/replicasets/test-rolling-update-controller db1927bc-33e6-48ca-9fe1-afe58ec42fac 454994 2 2020-09-15 11:42:30 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment cf15ae9c-58c6-4c7c-851a-90d95d1e07ec 0xc0042403f7 0xc0042403f8}] [] [{e2e.test Update apps/v1 2020-09-15 11:42:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-09-15 11:42:38 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cf15ae9c-58c6-4c7c-851a-90d95d1e07ec\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc004240498 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Sep 15 11:42:39.827: INFO: Pod "test-rolling-update-deployment-c4cb8d6d9-5nt6g" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-c4cb8d6d9-5nt6g test-rolling-update-deployment-c4cb8d6d9- deployment-9333 /api/v1/namespaces/deployment-9333/pods/test-rolling-update-deployment-c4cb8d6d9-5nt6g ff1d4119-bcb2-45a2-830c-8f5f16ca1ed3 454983 0 2020-09-15 11:42:35 +0000 UTC map[name:sample-pod pod-template-hash:c4cb8d6d9] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-c4cb8d6d9 a62d7d86-c9d8-418f-a18e-b93ff39fd5f8 0xc004240a30 0xc004240a31}] [] [{kube-controller-manager Update v1 2020-09-15 11:42:35 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a62d7d86-c9d8-418f-a18e-b93ff39fd5f8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-15 11:42:38 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.193\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zdf9p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zdf9p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resource
s:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zdf9p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},
SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 11:42:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 11:42:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 11:42:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 11:42:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:10.244.1.193,StartTime:2020-09-15 11:42:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-15 11:42:38 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0,ContainerID:containerd://278118782c4d7a4779000815b93c4d9bf4110c9c002ae151409a33b0ebbe6f90,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.193,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:42:39.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9333" for this suite. 
• [SLOW TEST:9.323 seconds] [sig-apps] Deployment /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":303,"completed":235,"skipped":4032,"failed":0} SSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:42:39.834: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-735cb7f5-2a0e-4af4-940b-22b16b92ef13 STEP: Creating a pod to test consume secrets Sep 15 11:42:39.945: INFO: Waiting up to 5m0s for pod "pod-secrets-8dc695f0-402f-48ad-af0e-e26f0faee907" in namespace "secrets-5970" to be "Succeeded or Failed" Sep 15 11:42:39.964: INFO: Pod "pod-secrets-8dc695f0-402f-48ad-af0e-e26f0faee907": Phase="Pending", Reason="", readiness=false. 
Elapsed: 18.507328ms Sep 15 11:42:41.968: INFO: Pod "pod-secrets-8dc695f0-402f-48ad-af0e-e26f0faee907": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022565225s Sep 15 11:42:43.972: INFO: Pod "pod-secrets-8dc695f0-402f-48ad-af0e-e26f0faee907": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026614405s STEP: Saw pod success Sep 15 11:42:43.972: INFO: Pod "pod-secrets-8dc695f0-402f-48ad-af0e-e26f0faee907" satisfied condition "Succeeded or Failed" Sep 15 11:42:43.975: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-8dc695f0-402f-48ad-af0e-e26f0faee907 container secret-volume-test: STEP: delete the pod Sep 15 11:42:44.062: INFO: Waiting for pod pod-secrets-8dc695f0-402f-48ad-af0e-e26f0faee907 to disappear Sep 15 11:42:44.280: INFO: Pod pod-secrets-8dc695f0-402f-48ad-af0e-e26f0faee907 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:42:44.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5970" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":303,"completed":236,"skipped":4038,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:42:44.392: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Sep 15 11:42:44.539: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-8960 /api/v1/namespaces/watch-8960/configmaps/e2e-watch-test-resource-version 49e09e2b-a4cf-4dbc-a4eb-6f9ef889c85a 455036 0 2020-09-15 11:42:44 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-09-15 11:42:44 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 
2,},BinaryData:map[string][]byte{},Immutable:nil,} Sep 15 11:42:44.539: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-8960 /api/v1/namespaces/watch-8960/configmaps/e2e-watch-test-resource-version 49e09e2b-a4cf-4dbc-a4eb-6f9ef889c85a 455037 0 2020-09-15 11:42:44 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-09-15 11:42:44 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:42:44.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8960" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":303,"completed":237,"skipped":4044,"failed":0} SSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:42:44.546: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects 
[Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 15 11:42:44.589: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Sep 15 11:42:45.174: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-09-15T11:42:45Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-09-15T11:42:45Z]] name:name1 resourceVersion:455066 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:a43dbe6e-a248-4445-800c-c7fac646a762] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Sep 15 11:42:55.182: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-09-15T11:42:55Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-09-15T11:42:55Z]] name:name2 resourceVersion:455115 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:dccaf9df-1567-4533-b299-5df54854a2bc] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Sep 15 11:43:05.193: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-09-15T11:42:45Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-09-15T11:43:05Z]] name:name1 
resourceVersion:455145 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:a43dbe6e-a248-4445-800c-c7fac646a762] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Sep 15 11:43:15.201: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-09-15T11:42:55Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-09-15T11:43:15Z]] name:name2 resourceVersion:455175 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:dccaf9df-1567-4533-b299-5df54854a2bc] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Sep 15 11:43:25.210: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-09-15T11:42:45Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-09-15T11:43:05Z]] name:name1 resourceVersion:455205 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:a43dbe6e-a248-4445-800c-c7fac646a762] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Sep 15 11:43:35.231: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-09-15T11:42:55Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-09-15T11:43:15Z]] name:name2 
resourceVersion:455235 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:dccaf9df-1567-4533-b299-5df54854a2bc] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:43:45.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-940" for this suite. • [SLOW TEST:61.206 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42 watch on custom resource definition objects [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":303,"completed":238,"skipped":4048,"failed":0} SSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:43:45.753: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Sep 15 11:43:45.849: INFO: Waiting up to 5m0s for pod "downward-api-1c72f076-c627-4515-bdfa-848e4aeee279" in namespace "downward-api-9406" to be "Succeeded or Failed" Sep 15 11:43:45.854: INFO: Pod "downward-api-1c72f076-c627-4515-bdfa-848e4aeee279": Phase="Pending", Reason="", readiness=false. Elapsed: 4.48832ms Sep 15 11:43:47.858: INFO: Pod "downward-api-1c72f076-c627-4515-bdfa-848e4aeee279": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008947543s Sep 15 11:43:49.863: INFO: Pod "downward-api-1c72f076-c627-4515-bdfa-848e4aeee279": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013769149s STEP: Saw pod success Sep 15 11:43:49.863: INFO: Pod "downward-api-1c72f076-c627-4515-bdfa-848e4aeee279" satisfied condition "Succeeded or Failed" Sep 15 11:43:49.866: INFO: Trying to get logs from node kali-worker2 pod downward-api-1c72f076-c627-4515-bdfa-848e4aeee279 container dapi-container: STEP: delete the pod Sep 15 11:43:49.903: INFO: Waiting for pod downward-api-1c72f076-c627-4515-bdfa-848e4aeee279 to disappear Sep 15 11:43:49.913: INFO: Pod downward-api-1c72f076-c627-4515-bdfa-848e4aeee279 no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:43:49.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9406" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":303,"completed":239,"skipped":4056,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:43:49.922: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-8198 [It] Should recreate evicted statefulset [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-8198 STEP: Creating statefulset with conflicting port in namespace statefulset-8198 STEP: Waiting until pod test-pod starts running in namespace statefulset-8198 STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace statefulset-8198 Sep 15 11:43:54.083: INFO: Observed stateful pod in namespace: statefulset-8198, name: ss-0, uid: cf5ff9b5-0d94-446c-a3b3-97fea0770ead, status phase: Pending. Waiting for statefulset controller to delete. Sep 15 11:43:54.089: INFO: Observed stateful pod in namespace: statefulset-8198, name: ss-0, uid: cf5ff9b5-0d94-446c-a3b3-97fea0770ead, status phase: Failed. Waiting for statefulset controller to delete. Sep 15 11:43:54.144: INFO: Observed stateful pod in namespace: statefulset-8198, name: ss-0, uid: cf5ff9b5-0d94-446c-a3b3-97fea0770ead, status phase: Failed. Waiting for statefulset controller to delete. Sep 15 11:43:54.147: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-8198 STEP: Removing pod with conflicting port in namespace statefulset-8198 STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-8198 and reaches the running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Sep 15 11:43:58.325: INFO: Deleting all statefulset in ns statefulset-8198 Sep 15 11:43:58.328: INFO: Scaling statefulset ss to 0 Sep 15 11:44:18.358: INFO: Waiting for statefulset status.replicas updated to 0 Sep 15 11:44:18.361: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:44:18.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8198" for this suite.
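The StatefulSet test above forces the eviction by placing a bare pod and a stateful pod that both request the same host port on the same node, so the stateful pod fails until the conflicting pod is removed. A hypothetical pair of manifests for that scenario (names, labels, node name, and the port number are invented for the sketch) could be:

```yaml
# Hypothetical manifests: a bare pod and a StatefulSet pod template that
# both claim the same hostPort on one node, producing the conflict above.
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  nodeName: kali-worker   # pin both workloads to one node
  containers:
  - name: conflict
    image: nginx
    ports:
    - containerPort: 80
      hostPort: 21017      # the contested host port
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test
  replicas: 1
  selector:
    matchLabels:
      app: ss
  template:
    metadata:
      labels:
        app: ss
    spec:
      nodeName: kali-worker
      containers:
      - name: webserver
        image: nginx
        ports:
        - containerPort: 80
          hostPort: 21017  # same hostPort -> ss-0 fails while test-pod holds it
```

Once `test-pod` is deleted, the StatefulSet controller recreates `ss-0`, which can then bind the port and reach the running state, matching the observed `Pending` → `Failed` → delete → recreate sequence in the log.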
• [SLOW TEST:28.486 seconds] [sig-apps] StatefulSet /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Should recreate evicted statefulset [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":303,"completed":240,"skipped":4085,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:44:18.408: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the 
deployment to be ready Sep 15 11:44:19.185: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 15 11:44:21.241: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735767059, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735767059, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735767059, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735767059, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 15 11:44:24.296: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 15 11:44:24.300: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4764-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 
15 11:44:25.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1264" for this suite. STEP: Destroying namespace "webhook-1264-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.134 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":303,"completed":241,"skipped":4100,"failed":0} SSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:44:25.543: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:89 Sep 15 11:44:25.728: INFO: Waiting up to 1m0s for all nodes to be ready Sep 15 11:45:25.748: INFO: Waiting for terminating namespaces to be deleted... [BeforeEach] PreemptionExecutionPath /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:45:25.750: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption-path STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] PreemptionExecutionPath /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:487 STEP: Finding an available node STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. Sep 15 11:45:29.922: INFO: found a healthy node: kali-worker [It] runs ReplicaSets to verify preemption running path [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 15 11:45:50.130: INFO: pods created so far: [1 1 1] Sep 15 11:45:50.130: INFO: length of pods created so far: 3 Sep 15 11:46:08.164: INFO: pods created so far: [2 2 1] [AfterEach] PreemptionExecutionPath /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:46:15.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-path-3195" for this suite. 
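The preemption test above runs ReplicaSets at different scheduling priorities so that higher-priority pods displace lower-priority ones on the chosen node. A sketch of the kind of PriorityClass objects and pod wiring involved (names and priority values are invented for illustration; the API group and fields are the standard `scheduling.k8s.io/v1` ones) might be:

```yaml
# Hypothetical PriorityClass pair plus a pod that uses the higher class.
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: low-priority
value: 1000
globalDefault: false
description: "Pods that may be preempted."
---
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority
value: 100000
description: "Pods that may preempt lower-priority pods."
---
apiVersion: v1
kind: Pod
metadata:
  name: preemptor
spec:
  priorityClassName: high-priority   # scheduler may evict low-priority pods for this
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.2
```

The `pods created so far: [1 1 1]` → `[2 2 1]` counts above reflect the higher-priority ReplicaSets making progress while the lowest-priority one is held back.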
[AfterEach] PreemptionExecutionPath /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:461 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:46:15.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-7752" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77 • [SLOW TEST:109.894 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PreemptionExecutionPath /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:450 runs ReplicaSets to verify preemption running path [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":303,"completed":242,"skipped":4109,"failed":0} SSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] KubeletManagedEtcHosts 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:46:15.437: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Sep 15 11:46:28.010: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6362 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 15 11:46:28.010: INFO: >>> kubeConfig: /root/.kube/config I0915 11:46:28.046717 7 log.go:181] (0xc00041b080) (0xc0010bed20) Create stream I0915 11:46:28.046758 7 log.go:181] (0xc00041b080) (0xc0010bed20) Stream added, broadcasting: 1 I0915 11:46:28.050415 7 log.go:181] (0xc00041b080) Reply frame received for 1 I0915 11:46:28.050461 7 log.go:181] (0xc00041b080) (0xc0010bedc0) Create stream I0915 11:46:28.050475 7 log.go:181] (0xc00041b080) (0xc0010bedc0) Stream added, broadcasting: 3 I0915 11:46:28.051371 7 log.go:181] (0xc00041b080) Reply frame received for 3 I0915 11:46:28.051415 7 log.go:181] (0xc00041b080) (0xc0040966e0) Create stream I0915 11:46:28.051428 7 log.go:181] (0xc00041b080) (0xc0040966e0) Stream added, broadcasting: 5 I0915 11:46:28.052311 7 log.go:181] (0xc00041b080) Reply frame received for 5 I0915 11:46:28.140845 7 log.go:181] (0xc00041b080) Data frame received for 3 I0915 11:46:28.140878 7 
log.go:181] (0xc0010bedc0) (3) Data frame handling I0915 11:46:28.140892 7 log.go:181] (0xc0010bedc0) (3) Data frame sent I0915 11:46:28.140903 7 log.go:181] (0xc00041b080) Data frame received for 3 I0915 11:46:28.140913 7 log.go:181] (0xc0010bedc0) (3) Data frame handling I0915 11:46:28.140965 7 log.go:181] (0xc00041b080) Data frame received for 5 I0915 11:46:28.140993 7 log.go:181] (0xc0040966e0) (5) Data frame handling I0915 11:46:28.142599 7 log.go:181] (0xc00041b080) Data frame received for 1 I0915 11:46:28.142637 7 log.go:181] (0xc0010bed20) (1) Data frame handling I0915 11:46:28.142669 7 log.go:181] (0xc0010bed20) (1) Data frame sent I0915 11:46:28.142699 7 log.go:181] (0xc00041b080) (0xc0010bed20) Stream removed, broadcasting: 1 I0915 11:46:28.142739 7 log.go:181] (0xc00041b080) Go away received I0915 11:46:28.142838 7 log.go:181] (0xc00041b080) (0xc0010bed20) Stream removed, broadcasting: 1 I0915 11:46:28.142918 7 log.go:181] (0xc00041b080) (0xc0010bedc0) Stream removed, broadcasting: 3 I0915 11:46:28.142954 7 log.go:181] (0xc00041b080) (0xc0040966e0) Stream removed, broadcasting: 5 Sep 15 11:46:28.142: INFO: Exec stderr: "" Sep 15 11:46:28.143: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6362 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 15 11:46:28.143: INFO: >>> kubeConfig: /root/.kube/config I0915 11:46:28.179454 7 log.go:181] (0xc00088ab00) (0xc0042a6b40) Create stream I0915 11:46:28.179483 7 log.go:181] (0xc00088ab00) (0xc0042a6b40) Stream added, broadcasting: 1 I0915 11:46:28.181320 7 log.go:181] (0xc00088ab00) Reply frame received for 1 I0915 11:46:28.181390 7 log.go:181] (0xc00088ab00) (0xc0014ac000) Create stream I0915 11:46:28.181422 7 log.go:181] (0xc00088ab00) (0xc0014ac000) Stream added, broadcasting: 3 I0915 11:46:28.182234 7 log.go:181] (0xc00088ab00) Reply frame received for 3 I0915 11:46:28.182254 7 log.go:181] (0xc00088ab00) 
(0xc003bc1860) Create stream I0915 11:46:28.182259 7 log.go:181] (0xc00088ab00) (0xc003bc1860) Stream added, broadcasting: 5 I0915 11:46:28.182940 7 log.go:181] (0xc00088ab00) Reply frame received for 5 I0915 11:46:28.252910 7 log.go:181] (0xc00088ab00) Data frame received for 3 I0915 11:46:28.252954 7 log.go:181] (0xc0014ac000) (3) Data frame handling I0915 11:46:28.252980 7 log.go:181] (0xc0014ac000) (3) Data frame sent I0915 11:46:28.253051 7 log.go:181] (0xc00088ab00) Data frame received for 3 I0915 11:46:28.253061 7 log.go:181] (0xc0014ac000) (3) Data frame handling I0915 11:46:28.253077 7 log.go:181] (0xc00088ab00) Data frame received for 5 I0915 11:46:28.253085 7 log.go:181] (0xc003bc1860) (5) Data frame handling I0915 11:46:28.254672 7 log.go:181] (0xc00088ab00) Data frame received for 1 I0915 11:46:28.254701 7 log.go:181] (0xc0042a6b40) (1) Data frame handling I0915 11:46:28.254728 7 log.go:181] (0xc0042a6b40) (1) Data frame sent I0915 11:46:28.254743 7 log.go:181] (0xc00088ab00) (0xc0042a6b40) Stream removed, broadcasting: 1 I0915 11:46:28.254764 7 log.go:181] (0xc00088ab00) Go away received I0915 11:46:28.254900 7 log.go:181] (0xc00088ab00) (0xc0042a6b40) Stream removed, broadcasting: 1 I0915 11:46:28.254931 7 log.go:181] (0xc00088ab00) (0xc0014ac000) Stream removed, broadcasting: 3 I0915 11:46:28.254943 7 log.go:181] (0xc00088ab00) (0xc003bc1860) Stream removed, broadcasting: 5 Sep 15 11:46:28.254: INFO: Exec stderr: "" Sep 15 11:46:28.254: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6362 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 15 11:46:28.255: INFO: >>> kubeConfig: /root/.kube/config I0915 11:46:28.293073 7 log.go:181] (0xc002b8e6e0) (0xc004096b40) Create stream I0915 11:46:28.293109 7 log.go:181] (0xc002b8e6e0) (0xc004096b40) Stream added, broadcasting: 1 I0915 11:46:28.295393 7 log.go:181] (0xc002b8e6e0) Reply frame received for 1 I0915 
11:46:28.295455 7 log.go:181] (0xc002b8e6e0) (0xc004096be0) Create stream I0915 11:46:28.295483 7 log.go:181] (0xc002b8e6e0) (0xc004096be0) Stream added, broadcasting: 3 I0915 11:46:28.296548 7 log.go:181] (0xc002b8e6e0) Reply frame received for 3 I0915 11:46:28.296592 7 log.go:181] (0xc002b8e6e0) (0xc003bc1900) Create stream I0915 11:46:28.296605 7 log.go:181] (0xc002b8e6e0) (0xc003bc1900) Stream added, broadcasting: 5 I0915 11:46:28.297569 7 log.go:181] (0xc002b8e6e0) Reply frame received for 5 I0915 11:46:28.363876 7 log.go:181] (0xc002b8e6e0) Data frame received for 5 I0915 11:46:28.363931 7 log.go:181] (0xc003bc1900) (5) Data frame handling I0915 11:46:28.363960 7 log.go:181] (0xc002b8e6e0) Data frame received for 3 I0915 11:46:28.363974 7 log.go:181] (0xc004096be0) (3) Data frame handling I0915 11:46:28.363992 7 log.go:181] (0xc004096be0) (3) Data frame sent I0915 11:46:28.364006 7 log.go:181] (0xc002b8e6e0) Data frame received for 3 I0915 11:46:28.364020 7 log.go:181] (0xc004096be0) (3) Data frame handling I0915 11:46:28.365911 7 log.go:181] (0xc002b8e6e0) Data frame received for 1 I0915 11:46:28.365992 7 log.go:181] (0xc004096b40) (1) Data frame handling I0915 11:46:28.366013 7 log.go:181] (0xc004096b40) (1) Data frame sent I0915 11:46:28.366025 7 log.go:181] (0xc002b8e6e0) (0xc004096b40) Stream removed, broadcasting: 1 I0915 11:46:28.366051 7 log.go:181] (0xc002b8e6e0) Go away received I0915 11:46:28.366155 7 log.go:181] (0xc002b8e6e0) (0xc004096b40) Stream removed, broadcasting: 1 I0915 11:46:28.366181 7 log.go:181] (0xc002b8e6e0) (0xc004096be0) Stream removed, broadcasting: 3 I0915 11:46:28.366193 7 log.go:181] (0xc002b8e6e0) (0xc003bc1900) Stream removed, broadcasting: 5 Sep 15 11:46:28.366: INFO: Exec stderr: "" Sep 15 11:46:28.366: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6362 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 15 
11:46:28.366: INFO: >>> kubeConfig: /root/.kube/config I0915 11:46:28.395424 7 log.go:181] (0xc00041b970) (0xc0010bf360) Create stream I0915 11:46:28.395464 7 log.go:181] (0xc00041b970) (0xc0010bf360) Stream added, broadcasting: 1 I0915 11:46:28.397714 7 log.go:181] (0xc00041b970) Reply frame received for 1 I0915 11:46:28.397752 7 log.go:181] (0xc00041b970) (0xc0014ac0a0) Create stream I0915 11:46:28.397765 7 log.go:181] (0xc00041b970) (0xc0014ac0a0) Stream added, broadcasting: 3 I0915 11:46:28.398684 7 log.go:181] (0xc00041b970) Reply frame received for 3 I0915 11:46:28.398713 7 log.go:181] (0xc00041b970) (0xc0042a6be0) Create stream I0915 11:46:28.398724 7 log.go:181] (0xc00041b970) (0xc0042a6be0) Stream added, broadcasting: 5 I0915 11:46:28.399478 7 log.go:181] (0xc00041b970) Reply frame received for 5 I0915 11:46:28.475381 7 log.go:181] (0xc00041b970) Data frame received for 3 I0915 11:46:28.475401 7 log.go:181] (0xc0014ac0a0) (3) Data frame handling I0915 11:46:28.475409 7 log.go:181] (0xc0014ac0a0) (3) Data frame sent I0915 11:46:28.475413 7 log.go:181] (0xc00041b970) Data frame received for 3 I0915 11:46:28.475419 7 log.go:181] (0xc0014ac0a0) (3) Data frame handling I0915 11:46:28.475545 7 log.go:181] (0xc00041b970) Data frame received for 5 I0915 11:46:28.475571 7 log.go:181] (0xc0042a6be0) (5) Data frame handling I0915 11:46:28.477276 7 log.go:181] (0xc00041b970) Data frame received for 1 I0915 11:46:28.477288 7 log.go:181] (0xc0010bf360) (1) Data frame handling I0915 11:46:28.477300 7 log.go:181] (0xc0010bf360) (1) Data frame sent I0915 11:46:28.477309 7 log.go:181] (0xc00041b970) (0xc0010bf360) Stream removed, broadcasting: 1 I0915 11:46:28.477408 7 log.go:181] (0xc00041b970) Go away received I0915 11:46:28.477482 7 log.go:181] (0xc00041b970) (0xc0010bf360) Stream removed, broadcasting: 1 I0915 11:46:28.477520 7 log.go:181] (0xc00041b970) (0xc0014ac0a0) Stream removed, broadcasting: 3 I0915 11:46:28.477535 7 log.go:181] (0xc00041b970) (0xc0042a6be0) 
Stream removed, broadcasting: 5 Sep 15 11:46:28.477: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Sep 15 11:46:28.477: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6362 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 15 11:46:28.477: INFO: >>> kubeConfig: /root/.kube/config I0915 11:46:28.526057 7 log.go:181] (0xc00088b1e0) (0xc0042a6e60) Create stream I0915 11:46:28.526086 7 log.go:181] (0xc00088b1e0) (0xc0042a6e60) Stream added, broadcasting: 1 I0915 11:46:28.529297 7 log.go:181] (0xc00088b1e0) Reply frame received for 1 I0915 11:46:28.529384 7 log.go:181] (0xc00088b1e0) (0xc0010bf400) Create stream I0915 11:46:28.529444 7 log.go:181] (0xc00088b1e0) (0xc0010bf400) Stream added, broadcasting: 3 I0915 11:46:28.531543 7 log.go:181] (0xc00088b1e0) Reply frame received for 3 I0915 11:46:28.531581 7 log.go:181] (0xc00088b1e0) (0xc0010bf4a0) Create stream I0915 11:46:28.531594 7 log.go:181] (0xc00088b1e0) (0xc0010bf4a0) Stream added, broadcasting: 5 I0915 11:46:28.532460 7 log.go:181] (0xc00088b1e0) Reply frame received for 5 I0915 11:46:28.605430 7 log.go:181] (0xc00088b1e0) Data frame received for 5 I0915 11:46:28.605488 7 log.go:181] (0xc0010bf4a0) (5) Data frame handling I0915 11:46:28.605525 7 log.go:181] (0xc00088b1e0) Data frame received for 3 I0915 11:46:28.605546 7 log.go:181] (0xc0010bf400) (3) Data frame handling I0915 11:46:28.605573 7 log.go:181] (0xc0010bf400) (3) Data frame sent I0915 11:46:28.605597 7 log.go:181] (0xc00088b1e0) Data frame received for 3 I0915 11:46:28.605618 7 log.go:181] (0xc0010bf400) (3) Data frame handling I0915 11:46:28.607113 7 log.go:181] (0xc00088b1e0) Data frame received for 1 I0915 11:46:28.607146 7 log.go:181] (0xc0042a6e60) (1) Data frame handling I0915 11:46:28.607158 7 log.go:181] (0xc0042a6e60) (1) Data frame sent I0915 11:46:28.607168 7 
log.go:181] (0xc00088b1e0) (0xc0042a6e60) Stream removed, broadcasting: 1 I0915 11:46:28.607182 7 log.go:181] (0xc00088b1e0) Go away received I0915 11:46:28.607262 7 log.go:181] (0xc00088b1e0) (0xc0042a6e60) Stream removed, broadcasting: 1 I0915 11:46:28.607345 7 log.go:181] (0xc00088b1e0) (0xc0010bf400) Stream removed, broadcasting: 3 I0915 11:46:28.607366 7 log.go:181] (0xc00088b1e0) (0xc0010bf4a0) Stream removed, broadcasting: 5 Sep 15 11:46:28.607: INFO: Exec stderr: "" Sep 15 11:46:28.607: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6362 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 15 11:46:28.607: INFO: >>> kubeConfig: /root/.kube/config I0915 11:46:28.644550 7 log.go:181] (0xc002f9a370) (0xc0010bf9a0) Create stream I0915 11:46:28.644575 7 log.go:181] (0xc002f9a370) (0xc0010bf9a0) Stream added, broadcasting: 1 I0915 11:46:28.647043 7 log.go:181] (0xc002f9a370) Reply frame received for 1 I0915 11:46:28.647086 7 log.go:181] (0xc002f9a370) (0xc003bc19a0) Create stream I0915 11:46:28.647101 7 log.go:181] (0xc002f9a370) (0xc003bc19a0) Stream added, broadcasting: 3 I0915 11:46:28.648301 7 log.go:181] (0xc002f9a370) Reply frame received for 3 I0915 11:46:28.648347 7 log.go:181] (0xc002f9a370) (0xc003bc1a40) Create stream I0915 11:46:28.648364 7 log.go:181] (0xc002f9a370) (0xc003bc1a40) Stream added, broadcasting: 5 I0915 11:46:28.649530 7 log.go:181] (0xc002f9a370) Reply frame received for 5 I0915 11:46:28.726291 7 log.go:181] (0xc002f9a370) Data frame received for 3 I0915 11:46:28.726325 7 log.go:181] (0xc003bc19a0) (3) Data frame handling I0915 11:46:28.726336 7 log.go:181] (0xc003bc19a0) (3) Data frame sent I0915 11:46:28.726343 7 log.go:181] (0xc002f9a370) Data frame received for 3 I0915 11:46:28.726353 7 log.go:181] (0xc003bc19a0) (3) Data frame handling I0915 11:46:28.726378 7 log.go:181] (0xc002f9a370) Data frame received for 5 I0915 
11:46:28.726390 7 log.go:181] (0xc003bc1a40) (5) Data frame handling I0915 11:46:28.729990 7 log.go:181] (0xc002f9a370) Data frame received for 1 I0915 11:46:28.730007 7 log.go:181] (0xc0010bf9a0) (1) Data frame handling I0915 11:46:28.730016 7 log.go:181] (0xc0010bf9a0) (1) Data frame sent I0915 11:46:28.730029 7 log.go:181] (0xc002f9a370) (0xc0010bf9a0) Stream removed, broadcasting: 1 I0915 11:46:28.730041 7 log.go:181] (0xc002f9a370) Go away received I0915 11:46:28.730125 7 log.go:181] (0xc002f9a370) (0xc0010bf9a0) Stream removed, broadcasting: 1 I0915 11:46:28.730140 7 log.go:181] (0xc002f9a370) (0xc003bc19a0) Stream removed, broadcasting: 3 I0915 11:46:28.730150 7 log.go:181] (0xc002f9a370) (0xc003bc1a40) Stream removed, broadcasting: 5 Sep 15 11:46:28.730: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Sep 15 11:46:28.730: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6362 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 15 11:46:28.730: INFO: >>> kubeConfig: /root/.kube/config I0915 11:46:28.752941 7 log.go:181] (0xc0055e44d0) (0xc003bc1d60) Create stream I0915 11:46:28.752974 7 log.go:181] (0xc0055e44d0) (0xc003bc1d60) Stream added, broadcasting: 1 I0915 11:46:28.755547 7 log.go:181] (0xc0055e44d0) Reply frame received for 1 I0915 11:46:28.755583 7 log.go:181] (0xc0055e44d0) (0xc003bc1e00) Create stream I0915 11:46:28.755595 7 log.go:181] (0xc0055e44d0) (0xc003bc1e00) Stream added, broadcasting: 3 I0915 11:46:28.757578 7 log.go:181] (0xc0055e44d0) Reply frame received for 3 I0915 11:46:28.757613 7 log.go:181] (0xc0055e44d0) (0xc004096c80) Create stream I0915 11:46:28.757630 7 log.go:181] (0xc0055e44d0) (0xc004096c80) Stream added, broadcasting: 5 I0915 11:46:28.758799 7 log.go:181] (0xc0055e44d0) Reply frame received for 5 I0915 11:46:28.836737 7 log.go:181] 
(0xc0055e44d0) Data frame received for 5 I0915 11:46:28.836781 7 log.go:181] (0xc004096c80) (5) Data frame handling I0915 11:46:28.836806 7 log.go:181] (0xc0055e44d0) Data frame received for 3 I0915 11:46:28.836825 7 log.go:181] (0xc003bc1e00) (3) Data frame handling I0915 11:46:28.836847 7 log.go:181] (0xc003bc1e00) (3) Data frame sent I0915 11:46:28.836872 7 log.go:181] (0xc0055e44d0) Data frame received for 3 I0915 11:46:28.836886 7 log.go:181] (0xc003bc1e00) (3) Data frame handling I0915 11:46:28.838363 7 log.go:181] (0xc0055e44d0) Data frame received for 1 I0915 11:46:28.838412 7 log.go:181] (0xc003bc1d60) (1) Data frame handling I0915 11:46:28.838438 7 log.go:181] (0xc003bc1d60) (1) Data frame sent I0915 11:46:28.838455 7 log.go:181] (0xc0055e44d0) (0xc003bc1d60) Stream removed, broadcasting: 1 I0915 11:46:28.838487 7 log.go:181] (0xc0055e44d0) Go away received I0915 11:46:28.838671 7 log.go:181] (0xc0055e44d0) (0xc003bc1d60) Stream removed, broadcasting: 1 I0915 11:46:28.838718 7 log.go:181] (0xc0055e44d0) (0xc003bc1e00) Stream removed, broadcasting: 3 I0915 11:46:28.838745 7 log.go:181] (0xc0055e44d0) (0xc004096c80) Stream removed, broadcasting: 5 Sep 15 11:46:28.838: INFO: Exec stderr: "" Sep 15 11:46:28.838: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6362 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 15 11:46:28.838: INFO: >>> kubeConfig: /root/.kube/config I0915 11:46:28.871358 7 log.go:181] (0xc002f9abb0) (0xc0010bfc20) Create stream I0915 11:46:28.871398 7 log.go:181] (0xc002f9abb0) (0xc0010bfc20) Stream added, broadcasting: 1 I0915 11:46:28.874331 7 log.go:181] (0xc002f9abb0) Reply frame received for 1 I0915 11:46:28.874366 7 log.go:181] (0xc002f9abb0) (0xc0010bfcc0) Create stream I0915 11:46:28.874385 7 log.go:181] (0xc002f9abb0) (0xc0010bfcc0) Stream added, broadcasting: 3 I0915 11:46:28.875350 7 log.go:181] (0xc002f9abb0) 
Reply frame received for 3 I0915 11:46:28.875405 7 log.go:181] (0xc002f9abb0) (0xc003bc1ea0) Create stream I0915 11:46:28.875422 7 log.go:181] (0xc002f9abb0) (0xc003bc1ea0) Stream added, broadcasting: 5 I0915 11:46:28.876537 7 log.go:181] (0xc002f9abb0) Reply frame received for 5 I0915 11:46:28.933913 7 log.go:181] (0xc002f9abb0) Data frame received for 3 I0915 11:46:28.933947 7 log.go:181] (0xc0010bfcc0) (3) Data frame handling I0915 11:46:28.933959 7 log.go:181] (0xc0010bfcc0) (3) Data frame sent I0915 11:46:28.933970 7 log.go:181] (0xc002f9abb0) Data frame received for 3 I0915 11:46:28.933986 7 log.go:181] (0xc0010bfcc0) (3) Data frame handling I0915 11:46:28.934005 7 log.go:181] (0xc002f9abb0) Data frame received for 5 I0915 11:46:28.934013 7 log.go:181] (0xc003bc1ea0) (5) Data frame handling I0915 11:46:28.935513 7 log.go:181] (0xc002f9abb0) Data frame received for 1 I0915 11:46:28.935532 7 log.go:181] (0xc0010bfc20) (1) Data frame handling I0915 11:46:28.935542 7 log.go:181] (0xc0010bfc20) (1) Data frame sent I0915 11:46:28.935554 7 log.go:181] (0xc002f9abb0) (0xc0010bfc20) Stream removed, broadcasting: 1 I0915 11:46:28.935604 7 log.go:181] (0xc002f9abb0) Go away received I0915 11:46:28.935660 7 log.go:181] (0xc002f9abb0) (0xc0010bfc20) Stream removed, broadcasting: 1 I0915 11:46:28.935696 7 log.go:181] (0xc002f9abb0) (0xc0010bfcc0) Stream removed, broadcasting: 3 I0915 11:46:28.935723 7 log.go:181] (0xc002f9abb0) (0xc003bc1ea0) Stream removed, broadcasting: 5 Sep 15 11:46:28.935: INFO: Exec stderr: "" Sep 15 11:46:28.935: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6362 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 15 11:46:28.935: INFO: >>> kubeConfig: /root/.kube/config I0915 11:46:28.966398 7 log.go:181] (0xc00088b810) (0xc0042a7040) Create stream I0915 11:46:28.966443 7 log.go:181] (0xc00088b810) (0xc0042a7040) Stream added, 
broadcasting: 1 I0915 11:46:28.968604 7 log.go:181] (0xc00088b810) Reply frame received for 1 I0915 11:46:28.968640 7 log.go:181] (0xc00088b810) (0xc0014ac1e0) Create stream I0915 11:46:28.968650 7 log.go:181] (0xc00088b810) (0xc0014ac1e0) Stream added, broadcasting: 3 I0915 11:46:28.969353 7 log.go:181] (0xc00088b810) Reply frame received for 3 I0915 11:46:28.969398 7 log.go:181] (0xc00088b810) (0xc004096d20) Create stream I0915 11:46:28.969411 7 log.go:181] (0xc00088b810) (0xc004096d20) Stream added, broadcasting: 5 I0915 11:46:28.970176 7 log.go:181] (0xc00088b810) Reply frame received for 5 I0915 11:46:29.032858 7 log.go:181] (0xc00088b810) Data frame received for 5 I0915 11:46:29.032904 7 log.go:181] (0xc004096d20) (5) Data frame handling I0915 11:46:29.032938 7 log.go:181] (0xc00088b810) Data frame received for 3 I0915 11:46:29.032960 7 log.go:181] (0xc0014ac1e0) (3) Data frame handling I0915 11:46:29.032985 7 log.go:181] (0xc0014ac1e0) (3) Data frame sent I0915 11:46:29.033000 7 log.go:181] (0xc00088b810) Data frame received for 3 I0915 11:46:29.033021 7 log.go:181] (0xc0014ac1e0) (3) Data frame handling I0915 11:46:29.034240 7 log.go:181] (0xc00088b810) Data frame received for 1 I0915 11:46:29.034274 7 log.go:181] (0xc0042a7040) (1) Data frame handling I0915 11:46:29.034284 7 log.go:181] (0xc0042a7040) (1) Data frame sent I0915 11:46:29.034293 7 log.go:181] (0xc00088b810) (0xc0042a7040) Stream removed, broadcasting: 1 I0915 11:46:29.034369 7 log.go:181] (0xc00088b810) Go away received I0915 11:46:29.034406 7 log.go:181] (0xc00088b810) (0xc0042a7040) Stream removed, broadcasting: 1 I0915 11:46:29.034433 7 log.go:181] (0xc00088b810) (0xc0014ac1e0) Stream removed, broadcasting: 3 I0915 11:46:29.034539 7 log.go:181] (0xc00088b810) (0xc004096d20) Stream removed, broadcasting: 5 Sep 15 11:46:29.034: INFO: Exec stderr: "" Sep 15 11:46:29.034: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6362 PodName:test-host-network-pod 
ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 15 11:46:29.034: INFO: >>> kubeConfig: /root/.kube/config I0915 11:46:29.073983 7 log.go:181] (0xc002f9b3f0) (0xc0010bfea0) Create stream I0915 11:46:29.074008 7 log.go:181] (0xc002f9b3f0) (0xc0010bfea0) Stream added, broadcasting: 1 I0915 11:46:29.076660 7 log.go:181] (0xc002f9b3f0) Reply frame received for 1 I0915 11:46:29.076694 7 log.go:181] (0xc002f9b3f0) (0xc004096dc0) Create stream I0915 11:46:29.076701 7 log.go:181] (0xc002f9b3f0) (0xc004096dc0) Stream added, broadcasting: 3 I0915 11:46:29.077654 7 log.go:181] (0xc002f9b3f0) Reply frame received for 3 I0915 11:46:29.077694 7 log.go:181] (0xc002f9b3f0) (0xc004096e60) Create stream I0915 11:46:29.077707 7 log.go:181] (0xc002f9b3f0) (0xc004096e60) Stream added, broadcasting: 5 I0915 11:46:29.078578 7 log.go:181] (0xc002f9b3f0) Reply frame received for 5 I0915 11:46:29.148487 7 log.go:181] (0xc002f9b3f0) Data frame received for 5 I0915 11:46:29.148534 7 log.go:181] (0xc004096e60) (5) Data frame handling I0915 11:46:29.148563 7 log.go:181] (0xc002f9b3f0) Data frame received for 3 I0915 11:46:29.148582 7 log.go:181] (0xc004096dc0) (3) Data frame handling I0915 11:46:29.148604 7 log.go:181] (0xc004096dc0) (3) Data frame sent I0915 11:46:29.148639 7 log.go:181] (0xc002f9b3f0) Data frame received for 3 I0915 11:46:29.148657 7 log.go:181] (0xc004096dc0) (3) Data frame handling I0915 11:46:29.149624 7 log.go:181] (0xc002f9b3f0) Data frame received for 1 I0915 11:46:29.149648 7 log.go:181] (0xc0010bfea0) (1) Data frame handling I0915 11:46:29.149663 7 log.go:181] (0xc0010bfea0) (1) Data frame sent I0915 11:46:29.149679 7 log.go:181] (0xc002f9b3f0) (0xc0010bfea0) Stream removed, broadcasting: 1 I0915 11:46:29.149707 7 log.go:181] (0xc002f9b3f0) Go away received I0915 11:46:29.149810 7 log.go:181] (0xc002f9b3f0) (0xc0010bfea0) Stream removed, broadcasting: 1 I0915 11:46:29.149828 7 log.go:181] (0xc002f9b3f0) (0xc004096dc0) 
Stream removed, broadcasting: 3 I0915 11:46:29.149834 7 log.go:181] (0xc002f9b3f0) (0xc004096e60) Stream removed, broadcasting: 5 Sep 15 11:46:29.149: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:46:29.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-6362" for this suite. • [SLOW TEST:13.720 seconds] [k8s.io] KubeletManagedEtcHosts /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":243,"skipped":4114,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:46:29.158: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume 
[NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name cm-test-opt-del-375bd46a-c281-46f6-baa1-1e07e6702348 STEP: Creating configMap with name cm-test-opt-upd-7900a72b-1768-4987-b534-2930811fe0fa STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-375bd46a-c281-46f6-baa1-1e07e6702348 STEP: Updating configmap cm-test-opt-upd-7900a72b-1768-4987-b534-2930811fe0fa STEP: Creating configMap with name cm-test-opt-create-3a7fa19f-71a8-47a9-aebd-bf3eee9398c4 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:48:02.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4516" for this suite. 
• [SLOW TEST:92.935 seconds] [sig-storage] Projected configMap /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":244,"skipped":4148,"failed":0} [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:48:02.093: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 15 11:48:02.144: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Sep 15 11:48:04.097: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7267 create -f -' Sep 15 11:48:07.659: INFO: 
stderr: "" Sep 15 11:48:07.659: INFO: stdout: "e2e-test-crd-publish-openapi-4089-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Sep 15 11:48:07.659: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7267 delete e2e-test-crd-publish-openapi-4089-crds test-cr' Sep 15 11:48:07.813: INFO: stderr: "" Sep 15 11:48:07.813: INFO: stdout: "e2e-test-crd-publish-openapi-4089-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Sep 15 11:48:07.814: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7267 apply -f -' Sep 15 11:48:08.572: INFO: stderr: "" Sep 15 11:48:08.572: INFO: stdout: "e2e-test-crd-publish-openapi-4089-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Sep 15 11:48:08.572: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7267 delete e2e-test-crd-publish-openapi-4089-crds test-cr' Sep 15 11:48:08.821: INFO: stderr: "" Sep 15 11:48:08.822: INFO: stdout: "e2e-test-crd-publish-openapi-4089-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Sep 15 11:48:08.822: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4089-crds' Sep 15 11:48:09.245: INFO: stderr: "" Sep 15 11:48:09.245: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4089-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:48:12.204: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7267" for this suite. • [SLOW TEST:10.117 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":303,"completed":245,"skipped":4148,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:48:12.211: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace 
services-2266 STEP: creating service affinity-nodeport-transition in namespace services-2266 STEP: creating replication controller affinity-nodeport-transition in namespace services-2266 I0915 11:48:12.397470 7 runners.go:190] Created replication controller with name: affinity-nodeport-transition, namespace: services-2266, replica count: 3 I0915 11:48:15.447881 7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0915 11:48:18.448125 7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0915 11:48:21.448510 7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Sep 15 11:48:21.463: INFO: Creating new exec pod Sep 15 11:48:26.496: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=services-2266 execpod-affinitytnrsv -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80' Sep 15 11:48:26.755: INFO: stderr: "I0915 11:48:26.645764 3019 log.go:181] (0xc0001fb130) (0xc000cf6820) Create stream\nI0915 11:48:26.645813 3019 log.go:181] (0xc0001fb130) (0xc000cf6820) Stream added, broadcasting: 1\nI0915 11:48:26.649587 3019 log.go:181] (0xc0001fb130) Reply frame received for 1\nI0915 11:48:26.649615 3019 log.go:181] (0xc0001fb130) (0xc000cf6000) Create stream\nI0915 11:48:26.649625 3019 log.go:181] (0xc0001fb130) (0xc000cf6000) Stream added, broadcasting: 3\nI0915 11:48:26.650375 3019 log.go:181] (0xc0001fb130) Reply frame received for 3\nI0915 11:48:26.650421 3019 log.go:181] (0xc0001fb130) (0xc000902140) Create stream\nI0915 11:48:26.650437 3019 log.go:181] (0xc0001fb130) (0xc000902140) Stream added, broadcasting: 5\nI0915 11:48:26.651036 3019 log.go:181] 
(0xc0001fb130) Reply frame received for 5\nI0915 11:48:26.747222 3019 log.go:181] (0xc0001fb130) Data frame received for 5\nI0915 11:48:26.747260 3019 log.go:181] (0xc000902140) (5) Data frame handling\nI0915 11:48:26.747285 3019 log.go:181] (0xc000902140) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport-transition 80\nI0915 11:48:26.747770 3019 log.go:181] (0xc0001fb130) Data frame received for 5\nI0915 11:48:26.747789 3019 log.go:181] (0xc000902140) (5) Data frame handling\nI0915 11:48:26.747798 3019 log.go:181] (0xc000902140) (5) Data frame sent\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\nI0915 11:48:26.747982 3019 log.go:181] (0xc0001fb130) Data frame received for 3\nI0915 11:48:26.748006 3019 log.go:181] (0xc000cf6000) (3) Data frame handling\nI0915 11:48:26.748534 3019 log.go:181] (0xc0001fb130) Data frame received for 5\nI0915 11:48:26.748549 3019 log.go:181] (0xc000902140) (5) Data frame handling\nI0915 11:48:26.750468 3019 log.go:181] (0xc0001fb130) Data frame received for 1\nI0915 11:48:26.750490 3019 log.go:181] (0xc000cf6820) (1) Data frame handling\nI0915 11:48:26.750501 3019 log.go:181] (0xc000cf6820) (1) Data frame sent\nI0915 11:48:26.750530 3019 log.go:181] (0xc0001fb130) (0xc000cf6820) Stream removed, broadcasting: 1\nI0915 11:48:26.750561 3019 log.go:181] (0xc0001fb130) Go away received\nI0915 11:48:26.750905 3019 log.go:181] (0xc0001fb130) (0xc000cf6820) Stream removed, broadcasting: 1\nI0915 11:48:26.750925 3019 log.go:181] (0xc0001fb130) (0xc000cf6000) Stream removed, broadcasting: 3\nI0915 11:48:26.750937 3019 log.go:181] (0xc0001fb130) (0xc000902140) Stream removed, broadcasting: 5\n" Sep 15 11:48:26.756: INFO: stdout: "" Sep 15 11:48:26.757: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=services-2266 execpod-affinitytnrsv -- /bin/sh -x -c nc -zv -t -w 2 10.109.115.163 80' Sep 15 11:48:27.014: INFO: stderr: "I0915 
11:48:26.928727 3039 log.go:181] (0xc0006fb080) (0xc0006f28c0) Create stream\nI0915 11:48:26.928784 3039 log.go:181] (0xc0006fb080) (0xc0006f28c0) Stream added, broadcasting: 1\nI0915 11:48:26.936033 3039 log.go:181] (0xc0006fb080) Reply frame received for 1\nI0915 11:48:26.936066 3039 log.go:181] (0xc0006fb080) (0xc0008e8780) Create stream\nI0915 11:48:26.936075 3039 log.go:181] (0xc0006fb080) (0xc0008e8780) Stream added, broadcasting: 3\nI0915 11:48:26.937142 3039 log.go:181] (0xc0006fb080) Reply frame received for 3\nI0915 11:48:26.937179 3039 log.go:181] (0xc0006fb080) (0xc0006f2000) Create stream\nI0915 11:48:26.937189 3039 log.go:181] (0xc0006fb080) (0xc0006f2000) Stream added, broadcasting: 5\nI0915 11:48:26.938043 3039 log.go:181] (0xc0006fb080) Reply frame received for 5\nI0915 11:48:27.008007 3039 log.go:181] (0xc0006fb080) Data frame received for 3\nI0915 11:48:27.008067 3039 log.go:181] (0xc0008e8780) (3) Data frame handling\nI0915 11:48:27.008107 3039 log.go:181] (0xc0006fb080) Data frame received for 5\nI0915 11:48:27.008218 3039 log.go:181] (0xc0006f2000) (5) Data frame handling\nI0915 11:48:27.008267 3039 log.go:181] (0xc0006f2000) (5) Data frame sent\nI0915 11:48:27.008303 3039 log.go:181] (0xc0006fb080) Data frame received for 5\nI0915 11:48:27.008323 3039 log.go:181] (0xc0006f2000) (5) Data frame handling\n+ nc -zv -t -w 2 10.109.115.163 80\nConnection to 10.109.115.163 80 port [tcp/http] succeeded!\nI0915 11:48:27.010222 3039 log.go:181] (0xc0006fb080) Data frame received for 1\nI0915 11:48:27.010256 3039 log.go:181] (0xc0006f28c0) (1) Data frame handling\nI0915 11:48:27.010276 3039 log.go:181] (0xc0006f28c0) (1) Data frame sent\nI0915 11:48:27.010316 3039 log.go:181] (0xc0006fb080) (0xc0006f28c0) Stream removed, broadcasting: 1\nI0915 11:48:27.010348 3039 log.go:181] (0xc0006fb080) Go away received\nI0915 11:48:27.010849 3039 log.go:181] (0xc0006fb080) (0xc0006f28c0) Stream removed, broadcasting: 1\nI0915 11:48:27.010873 3039 log.go:181] 
(0xc0006fb080) (0xc0008e8780) Stream removed, broadcasting: 3\nI0915 11:48:27.010886 3039 log.go:181] (0xc0006fb080) (0xc0006f2000) Stream removed, broadcasting: 5\n" Sep 15 11:48:27.014: INFO: stdout: "" Sep 15 11:48:27.014: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=services-2266 execpod-affinitytnrsv -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.11 32445' Sep 15 11:48:27.237: INFO: stderr: "I0915 11:48:27.156939 3057 log.go:181] (0xc000f16e70) (0xc0002f05a0) Create stream\nI0915 11:48:27.156995 3057 log.go:181] (0xc000f16e70) (0xc0002f05a0) Stream added, broadcasting: 1\nI0915 11:48:27.162229 3057 log.go:181] (0xc000f16e70) Reply frame received for 1\nI0915 11:48:27.162279 3057 log.go:181] (0xc000f16e70) (0xc00051ec80) Create stream\nI0915 11:48:27.162294 3057 log.go:181] (0xc000f16e70) (0xc00051ec80) Stream added, broadcasting: 3\nI0915 11:48:27.163254 3057 log.go:181] (0xc000f16e70) Reply frame received for 3\nI0915 11:48:27.163303 3057 log.go:181] (0xc000f16e70) (0xc0002f0e60) Create stream\nI0915 11:48:27.163322 3057 log.go:181] (0xc000f16e70) (0xc0002f0e60) Stream added, broadcasting: 5\nI0915 11:48:27.164299 3057 log.go:181] (0xc000f16e70) Reply frame received for 5\nI0915 11:48:27.231093 3057 log.go:181] (0xc000f16e70) Data frame received for 3\nI0915 11:48:27.231118 3057 log.go:181] (0xc00051ec80) (3) Data frame handling\nI0915 11:48:27.231135 3057 log.go:181] (0xc000f16e70) Data frame received for 5\nI0915 11:48:27.231140 3057 log.go:181] (0xc0002f0e60) (5) Data frame handling\nI0915 11:48:27.231146 3057 log.go:181] (0xc0002f0e60) (5) Data frame sent\nI0915 11:48:27.231150 3057 log.go:181] (0xc000f16e70) Data frame received for 5\nI0915 11:48:27.231158 3057 log.go:181] (0xc0002f0e60) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.11 32445\nConnection to 172.18.0.11 32445 port [tcp/32445] succeeded!\nI0915 11:48:27.232664 3057 log.go:181] (0xc000f16e70) Data frame received for 
1\nI0915 11:48:27.232684 3057 log.go:181] (0xc0002f05a0) (1) Data frame handling\nI0915 11:48:27.232691 3057 log.go:181] (0xc0002f05a0) (1) Data frame sent\nI0915 11:48:27.232698 3057 log.go:181] (0xc000f16e70) (0xc0002f05a0) Stream removed, broadcasting: 1\nI0915 11:48:27.232706 3057 log.go:181] (0xc000f16e70) Go away received\nI0915 11:48:27.233030 3057 log.go:181] (0xc000f16e70) (0xc0002f05a0) Stream removed, broadcasting: 1\nI0915 11:48:27.233048 3057 log.go:181] (0xc000f16e70) (0xc00051ec80) Stream removed, broadcasting: 3\nI0915 11:48:27.233056 3057 log.go:181] (0xc000f16e70) (0xc0002f0e60) Stream removed, broadcasting: 5\n" Sep 15 11:48:27.237: INFO: stdout: "" Sep 15 11:48:27.237: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=services-2266 execpod-affinitytnrsv -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.12 32445' Sep 15 11:48:27.482: INFO: stderr: "I0915 11:48:27.390647 3075 log.go:181] (0xc000d174a0) (0xc000d0e960) Create stream\nI0915 11:48:27.390707 3075 log.go:181] (0xc000d174a0) (0xc000d0e960) Stream added, broadcasting: 1\nI0915 11:48:27.395744 3075 log.go:181] (0xc000d174a0) Reply frame received for 1\nI0915 11:48:27.395787 3075 log.go:181] (0xc000d174a0) (0xc000caa000) Create stream\nI0915 11:48:27.395799 3075 log.go:181] (0xc000d174a0) (0xc000caa000) Stream added, broadcasting: 3\nI0915 11:48:27.396696 3075 log.go:181] (0xc000d174a0) Reply frame received for 3\nI0915 11:48:27.396737 3075 log.go:181] (0xc000d174a0) (0xc000d0e000) Create stream\nI0915 11:48:27.396751 3075 log.go:181] (0xc000d174a0) (0xc000d0e000) Stream added, broadcasting: 5\nI0915 11:48:27.397574 3075 log.go:181] (0xc000d174a0) Reply frame received for 5\nI0915 11:48:27.475275 3075 log.go:181] (0xc000d174a0) Data frame received for 5\nI0915 11:48:27.475317 3075 log.go:181] (0xc000d0e000) (5) Data frame handling\nI0915 11:48:27.475332 3075 log.go:181] (0xc000d0e000) (5) Data frame sent\nI0915 11:48:27.475348 
3075 log.go:181] (0xc000d174a0) Data frame received for 5\nI0915 11:48:27.475360 3075 log.go:181] (0xc000d0e000) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.12 32445\nConnection to 172.18.0.12 32445 port [tcp/32445] succeeded!\nI0915 11:48:27.475385 3075 log.go:181] (0xc000d174a0) Data frame received for 3\nI0915 11:48:27.475400 3075 log.go:181] (0xc000caa000) (3) Data frame handling\nI0915 11:48:27.476992 3075 log.go:181] (0xc000d174a0) Data frame received for 1\nI0915 11:48:27.477015 3075 log.go:181] (0xc000d0e960) (1) Data frame handling\nI0915 11:48:27.477027 3075 log.go:181] (0xc000d0e960) (1) Data frame sent\nI0915 11:48:27.477059 3075 log.go:181] (0xc000d174a0) (0xc000d0e960) Stream removed, broadcasting: 1\nI0915 11:48:27.477078 3075 log.go:181] (0xc000d174a0) Go away received\nI0915 11:48:27.477733 3075 log.go:181] (0xc000d174a0) (0xc000d0e960) Stream removed, broadcasting: 1\nI0915 11:48:27.477764 3075 log.go:181] (0xc000d174a0) (0xc000caa000) Stream removed, broadcasting: 3\nI0915 11:48:27.477776 3075 log.go:181] (0xc000d174a0) (0xc000d0e000) Stream removed, broadcasting: 5\n" Sep 15 11:48:27.482: INFO: stdout: "" Sep 15 11:48:27.489: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=services-2266 execpod-affinitytnrsv -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.11:32445/ ; done' Sep 15 11:48:27.828: INFO: stderr: "I0915 11:48:27.643875 3093 log.go:181] (0xc000844f20) (0xc0000b99a0) Create stream\nI0915 11:48:27.643930 3093 log.go:181] (0xc000844f20) (0xc0000b99a0) Stream added, broadcasting: 1\nI0915 11:48:27.649527 3093 log.go:181] (0xc000844f20) Reply frame received for 1\nI0915 11:48:27.649572 3093 log.go:181] (0xc000844f20) (0xc0000b81e0) Create stream\nI0915 11:48:27.649589 3093 log.go:181] (0xc000844f20) (0xc0000b81e0) Stream added, broadcasting: 3\nI0915 11:48:27.650492 3093 log.go:181] (0xc000844f20) Reply frame 
received for 3\nI0915 11:48:27.650525 3093 log.go:181] (0xc000844f20) (0xc000415040) Create stream\nI0915 11:48:27.650535 3093 log.go:181] (0xc000844f20) (0xc000415040) Stream added, broadcasting: 5\nI0915 11:48:27.651618 3093 log.go:181] (0xc000844f20) Reply frame received for 5\nI0915 11:48:27.721841 3093 log.go:181] (0xc000844f20) Data frame received for 5\nI0915 11:48:27.721875 3093 log.go:181] (0xc000415040) (5) Data frame handling\nI0915 11:48:27.721888 3093 log.go:181] (0xc000415040) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:32445/\nI0915 11:48:27.721905 3093 log.go:181] (0xc000844f20) Data frame received for 3\nI0915 11:48:27.721913 3093 log.go:181] (0xc0000b81e0) (3) Data frame handling\nI0915 11:48:27.721923 3093 log.go:181] (0xc0000b81e0) (3) Data frame sent\nI0915 11:48:27.725714 3093 log.go:181] (0xc000844f20) Data frame received for 3\nI0915 11:48:27.725732 3093 log.go:181] (0xc0000b81e0) (3) Data frame handling\nI0915 11:48:27.725747 3093 log.go:181] (0xc0000b81e0) (3) Data frame sent\nI0915 11:48:27.726428 3093 log.go:181] (0xc000844f20) Data frame received for 5\nI0915 11:48:27.726465 3093 log.go:181] (0xc000415040) (5) Data frame handling\nI0915 11:48:27.726484 3093 log.go:181] (0xc000415040) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:32445/\nI0915 11:48:27.726508 3093 log.go:181] (0xc000844f20) Data frame received for 3\nI0915 11:48:27.726521 3093 log.go:181] (0xc0000b81e0) (3) Data frame handling\nI0915 11:48:27.726544 3093 log.go:181] (0xc0000b81e0) (3) Data frame sent\nI0915 11:48:27.732451 3093 log.go:181] (0xc000844f20) Data frame received for 3\nI0915 11:48:27.732468 3093 log.go:181] (0xc0000b81e0) (3) Data frame handling\nI0915 11:48:27.732477 3093 log.go:181] (0xc0000b81e0) (3) Data frame sent\nI0915 11:48:27.732935 3093 log.go:181] (0xc000844f20) Data frame received for 5\nI0915 11:48:27.732966 3093 log.go:181] (0xc000415040) (5) Data frame 
handling\nI0915 11:48:27.732998 3093 log.go:181] (0xc000415040) (5) Data frame sent\nI0915 11:48:27.733020 3093 log.go:181] (0xc000844f20) Data frame received for 5\n+ echo\n+ curl -qI0915 11:48:27.733041 3093 log.go:181] (0xc000415040) (5) Data frame handling\nI0915 11:48:27.733076 3093 log.go:181] (0xc000415040) (5) Data frame sent\n -s --connect-timeout 2 http://172.18.0.11:32445/\nI0915 11:48:27.733097 3093 log.go:181] (0xc000844f20) Data frame received for 3\nI0915 11:48:27.733111 3093 log.go:181] (0xc0000b81e0) (3) Data frame handling\nI0915 11:48:27.733128 3093 log.go:181] (0xc0000b81e0) (3) Data frame sent\nI0915 11:48:27.736795 3093 log.go:181] (0xc000844f20) Data frame received for 3\nI0915 11:48:27.736832 3093 log.go:181] (0xc0000b81e0) (3) Data frame handling\nI0915 11:48:27.736856 3093 log.go:181] (0xc0000b81e0) (3) Data frame sent\nI0915 11:48:27.736991 3093 log.go:181] (0xc000844f20) Data frame received for 5\nI0915 11:48:27.737013 3093 log.go:181] (0xc000415040) (5) Data frame handling\nI0915 11:48:27.737025 3093 log.go:181] (0xc000415040) (5) Data frame sent\nI0915 11:48:27.737036 3093 log.go:181] (0xc000844f20) Data frame received for 5\nI0915 11:48:27.737046 3093 log.go:181] (0xc000415040) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:32445/\nI0915 11:48:27.737069 3093 log.go:181] (0xc000415040) (5) Data frame sent\nI0915 11:48:27.737184 3093 log.go:181] (0xc000844f20) Data frame received for 3\nI0915 11:48:27.737203 3093 log.go:181] (0xc0000b81e0) (3) Data frame handling\nI0915 11:48:27.737222 3093 log.go:181] (0xc0000b81e0) (3) Data frame sent\nI0915 11:48:27.742219 3093 log.go:181] (0xc000844f20) Data frame received for 3\nI0915 11:48:27.742246 3093 log.go:181] (0xc0000b81e0) (3) Data frame handling\nI0915 11:48:27.742265 3093 log.go:181] (0xc0000b81e0) (3) Data frame sent\nI0915 11:48:27.743006 3093 log.go:181] (0xc000844f20) Data frame received for 3\nI0915 11:48:27.743044 3093 log.go:181] 
(0xc0000b81e0) (3) Data frame handling\nI0915 11:48:27.743056 3093 log.go:181] (0xc0000b81e0) (3) Data frame sent\nI0915 11:48:27.743073 3093 log.go:181] (0xc000844f20) Data frame received for 5\nI0915 11:48:27.743081 3093 log.go:181] (0xc000415040) (5) Data frame handling\nI0915 11:48:27.743091 3093 log.go:181] (0xc000415040) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:32445/\nI0915 11:48:27.748355 3093 log.go:181] (0xc000844f20) Data frame received for 3\nI0915 11:48:27.748386 3093 log.go:181] (0xc0000b81e0) (3) Data frame handling\nI0915 11:48:27.748420 3093 log.go:181] (0xc0000b81e0) (3) Data frame sent\nI0915 11:48:27.749114 3093 log.go:181] (0xc000844f20) Data frame received for 5\nI0915 11:48:27.749137 3093 log.go:181] (0xc000415040) (5) Data frame handling\nI0915 11:48:27.749148 3093 log.go:181] (0xc000415040) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:32445/\nI0915 11:48:27.749166 3093 log.go:181] (0xc000844f20) Data frame received for 3\nI0915 11:48:27.749175 3093 log.go:181] (0xc0000b81e0) (3) Data frame handling\nI0915 11:48:27.749185 3093 log.go:181] (0xc0000b81e0) (3) Data frame sent\nI0915 11:48:27.752892 3093 log.go:181] (0xc000844f20) Data frame received for 3\nI0915 11:48:27.752931 3093 log.go:181] (0xc0000b81e0) (3) Data frame handling\nI0915 11:48:27.752959 3093 log.go:181] (0xc0000b81e0) (3) Data frame sent\nI0915 11:48:27.753693 3093 log.go:181] (0xc000844f20) Data frame received for 5\nI0915 11:48:27.753717 3093 log.go:181] (0xc000415040) (5) Data frame handling\nI0915 11:48:27.753744 3093 log.go:181] (0xc000415040) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:32445/\nI0915 11:48:27.753927 3093 log.go:181] (0xc000844f20) Data frame received for 3\nI0915 11:48:27.753952 3093 log.go:181] (0xc0000b81e0) (3) Data frame handling\nI0915 11:48:27.753977 3093 log.go:181] (0xc0000b81e0) (3) Data frame sent\nI0915 11:48:27.761104 3093 
log.go:181] (0xc000844f20) Data frame received for 3\nI0915 11:48:27.761133 3093 log.go:181] (0xc0000b81e0) (3) Data frame handling\nI0915 11:48:27.761162 3093 log.go:181] (0xc0000b81e0) (3) Data frame sent\nI0915 11:48:27.761735 3093 log.go:181] (0xc000844f20) Data frame received for 3\nI0915 11:48:27.761776 3093 log.go:181] (0xc0000b81e0) (3) Data frame handling\nI0915 11:48:27.761795 3093 log.go:181] (0xc0000b81e0) (3) Data frame sent\nI0915 11:48:27.761813 3093 log.go:181] (0xc000844f20) Data frame received for 5\nI0915 11:48:27.761823 3093 log.go:181] (0xc000415040) (5) Data frame handling\nI0915 11:48:27.761842 3093 log.go:181] (0xc000415040) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:32445/\nI0915 11:48:27.766356 3093 log.go:181] (0xc000844f20) Data frame received for 3\nI0915 11:48:27.766370 3093 log.go:181] (0xc0000b81e0) (3) Data frame handling\nI0915 11:48:27.766377 3093 log.go:181] (0xc0000b81e0) (3) Data frame sent\nI0915 11:48:27.766908 3093 log.go:181] (0xc000844f20) Data frame received for 5\nI0915 11:48:27.766945 3093 log.go:181] (0xc000415040) (5) Data frame handling\nI0915 11:48:27.766975 3093 log.go:181] (0xc000415040) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:32445/\nI0915 11:48:27.767008 3093 log.go:181] (0xc000844f20) Data frame received for 3\nI0915 11:48:27.767033 3093 log.go:181] (0xc0000b81e0) (3) Data frame handling\nI0915 11:48:27.767052 3093 log.go:181] (0xc0000b81e0) (3) Data frame sent\nI0915 11:48:27.774215 3093 log.go:181] (0xc000844f20) Data frame received for 3\nI0915 11:48:27.774229 3093 log.go:181] (0xc0000b81e0) (3) Data frame handling\nI0915 11:48:27.774236 3093 log.go:181] (0xc0000b81e0) (3) Data frame sent\nI0915 11:48:27.774998 3093 log.go:181] (0xc000844f20) Data frame received for 5\nI0915 11:48:27.775021 3093 log.go:181] (0xc000415040) (5) Data frame handling\nI0915 11:48:27.775034 3093 log.go:181] (0xc000415040) (5) Data frame sent\n+ echo\n+ 
curl -q -s --connect-timeout 2 http://172.18.0.11:32445/\nI0915 11:48:27.775045 3093 log.go:181] (0xc000844f20) Data frame received for 3\nI0915 11:48:27.775079 3093 log.go:181] (0xc0000b81e0) (3) Data frame handling\nI0915 11:48:27.775109 3093 log.go:181] (0xc0000b81e0) (3) Data frame sent\nI0915 11:48:27.780892 3093 log.go:181] (0xc000844f20) Data frame received for 3\nI0915 11:48:27.780907 3093 log.go:181] (0xc0000b81e0) (3) Data frame handling\nI0915 11:48:27.780918 3093 log.go:181] (0xc0000b81e0) (3) Data frame sent\nI0915 11:48:27.781538 3093 log.go:181] (0xc000844f20) Data frame received for 3\nI0915 11:48:27.781557 3093 log.go:181] (0xc0000b81e0) (3) Data frame handling\nI0915 11:48:27.781567 3093 log.go:181] (0xc0000b81e0) (3) Data frame sent\nI0915 11:48:27.781582 3093 log.go:181] (0xc000844f20) Data frame received for 5\nI0915 11:48:27.781591 3093 log.go:181] (0xc000415040) (5) Data frame handling\nI0915 11:48:27.781600 3093 log.go:181] (0xc000415040) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:32445/\nI0915 11:48:27.789051 3093 log.go:181] (0xc000844f20) Data frame received for 3\nI0915 11:48:27.789068 3093 log.go:181] (0xc0000b81e0) (3) Data frame handling\nI0915 11:48:27.789081 3093 log.go:181] (0xc0000b81e0) (3) Data frame sent\nI0915 11:48:27.789924 3093 log.go:181] (0xc000844f20) Data frame received for 3\nI0915 11:48:27.789940 3093 log.go:181] (0xc0000b81e0) (3) Data frame handling\nI0915 11:48:27.789956 3093 log.go:181] (0xc000844f20) Data frame received for 5\nI0915 11:48:27.789976 3093 log.go:181] (0xc000415040) (5) Data frame handling\nI0915 11:48:27.789987 3093 log.go:181] (0xc000415040) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:32445/\nI0915 11:48:27.790003 3093 log.go:181] (0xc0000b81e0) (3) Data frame sent\nI0915 11:48:27.794070 3093 log.go:181] (0xc000844f20) Data frame received for 3\nI0915 11:48:27.794093 3093 log.go:181] (0xc0000b81e0) (3) Data frame 
handling\nI0915 11:48:27.794106 3093 log.go:181] (0xc0000b81e0) (3) Data frame sent\nI0915 11:48:27.794946 3093 log.go:181] (0xc000844f20) Data frame received for 5\nI0915 11:48:27.794977 3093 log.go:181] (0xc000844f20) Data frame received for 3\nI0915 11:48:27.795013 3093 log.go:181] (0xc0000b81e0) (3) Data frame handling\nI0915 11:48:27.795032 3093 log.go:181] (0xc0000b81e0) (3) Data frame sent\nI0915 11:48:27.795052 3093 log.go:181] (0xc000415040) (5) Data frame handling\nI0915 11:48:27.795066 3093 log.go:181] (0xc000415040) (5) Data frame sent\nI0915 11:48:27.795087 3093 log.go:181] (0xc000844f20) Data frame received for 5\nI0915 11:48:27.795101 3093 log.go:181] (0xc000415040) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:32445/\nI0915 11:48:27.795125 3093 log.go:181] (0xc000415040) (5) Data frame sent\nI0915 11:48:27.798137 3093 log.go:181] (0xc000844f20) Data frame received for 3\nI0915 11:48:27.798158 3093 log.go:181] (0xc0000b81e0) (3) Data frame handling\nI0915 11:48:27.798178 3093 log.go:181] (0xc0000b81e0) (3) Data frame sent\nI0915 11:48:27.798919 3093 log.go:181] (0xc000844f20) Data frame received for 3\nI0915 11:48:27.798937 3093 log.go:181] (0xc0000b81e0) (3) Data frame handling\nI0915 11:48:27.798944 3093 log.go:181] (0xc0000b81e0) (3) Data frame sent\nI0915 11:48:27.798959 3093 log.go:181] (0xc000844f20) Data frame received for 5\nI0915 11:48:27.798977 3093 log.go:181] (0xc000415040) (5) Data frame handling\nI0915 11:48:27.798995 3093 log.go:181] (0xc000415040) (5) Data frame sent\nI0915 11:48:27.799005 3093 log.go:181] (0xc000844f20) Data frame received for 5\nI0915 11:48:27.799015 3093 log.go:181] (0xc000415040) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:32445/\nI0915 11:48:27.799042 3093 log.go:181] (0xc000415040) (5) Data frame sent\nI0915 11:48:27.806197 3093 log.go:181] (0xc000844f20) Data frame received for 3\nI0915 11:48:27.806243 3093 log.go:181] 
(0xc0000b81e0) (3) Data frame handling\nI0915 11:48:27.806285 3093 log.go:181] (0xc0000b81e0) (3) Data frame sent\nI0915 11:48:27.806984 3093 log.go:181] (0xc000844f20) Data frame received for 3\nI0915 11:48:27.806997 3093 log.go:181] (0xc0000b81e0) (3) Data frame handling\nI0915 11:48:27.807002 3093 log.go:181] (0xc0000b81e0) (3) Data frame sent\nI0915 11:48:27.807028 3093 log.go:181] (0xc000844f20) Data frame received for 5\nI0915 11:48:27.807047 3093 log.go:181] (0xc000415040) (5) Data frame handling\nI0915 11:48:27.807058 3093 log.go:181] (0xc000415040) (5) Data frame sent\nI0915 11:48:27.807067 3093 log.go:181] (0xc000844f20) Data frame received for 5\nI0915 11:48:27.807076 3093 log.go:181] (0xc000415040) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:32445/\nI0915 11:48:27.807103 3093 log.go:181] (0xc000415040) (5) Data frame sent\nI0915 11:48:27.814032 3093 log.go:181] (0xc000844f20) Data frame received for 3\nI0915 11:48:27.814057 3093 log.go:181] (0xc0000b81e0) (3) Data frame handling\nI0915 11:48:27.814090 3093 log.go:181] (0xc0000b81e0) (3) Data frame sent\nI0915 11:48:27.814447 3093 log.go:181] (0xc000844f20) Data frame received for 5\nI0915 11:48:27.814460 3093 log.go:181] (0xc000415040) (5) Data frame handling\nI0915 11:48:27.814469 3093 log.go:181] (0xc000415040) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:32445/\nI0915 11:48:27.814577 3093 log.go:181] (0xc000844f20) Data frame received for 3\nI0915 11:48:27.814596 3093 log.go:181] (0xc0000b81e0) (3) Data frame handling\nI0915 11:48:27.814614 3093 log.go:181] (0xc0000b81e0) (3) Data frame sent\nI0915 11:48:27.819891 3093 log.go:181] (0xc000844f20) Data frame received for 3\nI0915 11:48:27.819907 3093 log.go:181] (0xc0000b81e0) (3) Data frame handling\nI0915 11:48:27.819917 3093 log.go:181] (0xc0000b81e0) (3) Data frame sent\nI0915 11:48:27.821252 3093 log.go:181] (0xc000844f20) Data frame received for 5\nI0915 11:48:27.821284 
3093 log.go:181] (0xc000415040) (5) Data frame handling\nI0915 11:48:27.821371 3093 log.go:181] (0xc000844f20) Data frame received for 3\nI0915 11:48:27.821440 3093 log.go:181] (0xc0000b81e0) (3) Data frame handling\nI0915 11:48:27.823133 3093 log.go:181] (0xc000844f20) Data frame received for 1\nI0915 11:48:27.823162 3093 log.go:181] (0xc0000b99a0) (1) Data frame handling\nI0915 11:48:27.823176 3093 log.go:181] (0xc0000b99a0) (1) Data frame sent\nI0915 11:48:27.823205 3093 log.go:181] (0xc000844f20) (0xc0000b99a0) Stream removed, broadcasting: 1\nI0915 11:48:27.823240 3093 log.go:181] (0xc000844f20) Go away received\nI0915 11:48:27.823673 3093 log.go:181] (0xc000844f20) (0xc0000b99a0) Stream removed, broadcasting: 1\nI0915 11:48:27.823691 3093 log.go:181] (0xc000844f20) (0xc0000b81e0) Stream removed, broadcasting: 3\nI0915 11:48:27.823700 3093 log.go:181] (0xc000844f20) (0xc000415040) Stream removed, broadcasting: 5\n" Sep 15 11:48:27.829: INFO: stdout: "\naffinity-nodeport-transition-dvdf2\naffinity-nodeport-transition-h6dnc\naffinity-nodeport-transition-cqm8x\naffinity-nodeport-transition-dvdf2\naffinity-nodeport-transition-h6dnc\naffinity-nodeport-transition-dvdf2\naffinity-nodeport-transition-dvdf2\naffinity-nodeport-transition-h6dnc\naffinity-nodeport-transition-dvdf2\naffinity-nodeport-transition-cqm8x\naffinity-nodeport-transition-h6dnc\naffinity-nodeport-transition-cqm8x\naffinity-nodeport-transition-dvdf2\naffinity-nodeport-transition-h6dnc\naffinity-nodeport-transition-h6dnc\naffinity-nodeport-transition-h6dnc" Sep 15 11:48:27.829: INFO: Received response from host: affinity-nodeport-transition-dvdf2 Sep 15 11:48:27.829: INFO: Received response from host: affinity-nodeport-transition-h6dnc Sep 15 11:48:27.829: INFO: Received response from host: affinity-nodeport-transition-cqm8x Sep 15 11:48:27.829: INFO: Received response from host: affinity-nodeport-transition-dvdf2 Sep 15 11:48:27.829: INFO: Received response from host: 
affinity-nodeport-transition-h6dnc Sep 15 11:48:27.829: INFO: Received response from host: affinity-nodeport-transition-dvdf2 Sep 15 11:48:27.829: INFO: Received response from host: affinity-nodeport-transition-dvdf2 Sep 15 11:48:27.829: INFO: Received response from host: affinity-nodeport-transition-h6dnc Sep 15 11:48:27.829: INFO: Received response from host: affinity-nodeport-transition-dvdf2 Sep 15 11:48:27.829: INFO: Received response from host: affinity-nodeport-transition-cqm8x Sep 15 11:48:27.829: INFO: Received response from host: affinity-nodeport-transition-h6dnc Sep 15 11:48:27.829: INFO: Received response from host: affinity-nodeport-transition-cqm8x Sep 15 11:48:27.829: INFO: Received response from host: affinity-nodeport-transition-dvdf2 Sep 15 11:48:27.829: INFO: Received response from host: affinity-nodeport-transition-h6dnc Sep 15 11:48:27.829: INFO: Received response from host: affinity-nodeport-transition-h6dnc Sep 15 11:48:27.829: INFO: Received response from host: affinity-nodeport-transition-h6dnc Sep 15 11:48:27.839: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=services-2266 execpod-affinitytnrsv -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.11:32445/ ; done' Sep 15 11:48:28.185: INFO: stderr: "I0915 11:48:27.978448 3111 log.go:181] (0xc0003bee70) (0xc000530140) Create stream\nI0915 11:48:27.978494 3111 log.go:181] (0xc0003bee70) (0xc000530140) Stream added, broadcasting: 1\nI0915 11:48:27.983839 3111 log.go:181] (0xc0003bee70) Reply frame received for 1\nI0915 11:48:27.983896 3111 log.go:181] (0xc0003bee70) (0xc000642000) Create stream\nI0915 11:48:27.983910 3111 log.go:181] (0xc0003bee70) (0xc000642000) Stream added, broadcasting: 3\nI0915 11:48:27.987675 3111 log.go:181] (0xc0003bee70) Reply frame received for 3\nI0915 11:48:27.987724 3111 log.go:181] (0xc0003bee70) (0xc00058e140) Create stream\nI0915 
11:48:27.987746 3111 log.go:181] (0xc0003bee70) (0xc00058e140) Stream added, broadcasting: 5\nI0915 11:48:27.988744 3111 log.go:181] (0xc0003bee70) Reply frame received for 5\nI0915 11:48:28.072725 3111 log.go:181] (0xc0003bee70) Data frame received for 3\nI0915 11:48:28.072778 3111 log.go:181] (0xc000642000) (3) Data frame handling\nI0915 11:48:28.072801 3111 log.go:181] (0xc000642000) (3) Data frame sent\nI0915 11:48:28.072828 3111 log.go:181] (0xc0003bee70) Data frame received for 5\nI0915 11:48:28.072838 3111 log.go:181] (0xc00058e140) (5) Data frame handling\nI0915 11:48:28.072865 3111 log.go:181] (0xc00058e140) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:32445/\nI0915 11:48:28.079378 3111 log.go:181] (0xc0003bee70) Data frame received for 3\nI0915 11:48:28.079422 3111 log.go:181] (0xc000642000) (3) Data frame handling\nI0915 11:48:28.079454 3111 log.go:181] (0xc000642000) (3) Data frame sent\nI0915 11:48:28.079781 3111 log.go:181] (0xc0003bee70) Data frame received for 3\nI0915 11:48:28.079819 3111 log.go:181] (0xc000642000) (3) Data frame handling\nI0915 11:48:28.079859 3111 log.go:181] (0xc000642000) (3) Data frame sent\nI0915 11:48:28.079917 3111 log.go:181] (0xc0003bee70) Data frame received for 5\nI0915 11:48:28.079952 3111 log.go:181] (0xc00058e140) (5) Data frame handling\nI0915 11:48:28.079989 3111 log.go:181] (0xc00058e140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:32445/\nI0915 11:48:28.086962 3111 log.go:181] (0xc0003bee70) Data frame received for 3\nI0915 11:48:28.086995 3111 log.go:181] (0xc000642000) (3) Data frame handling\nI0915 11:48:28.087020 3111 log.go:181] (0xc000642000) (3) Data frame sent\nI0915 11:48:28.087706 3111 log.go:181] (0xc0003bee70) Data frame received for 5\nI0915 11:48:28.087738 3111 log.go:181] (0xc0003bee70) Data frame received for 3\nI0915 11:48:28.087762 3111 log.go:181] (0xc000642000) (3) Data frame handling\nI0915 11:48:28.087771 
3111 log.go:181] (0xc000642000) (3) Data frame sent\nI0915 11:48:28.087786 3111 log.go:181] (0xc00058e140) (5) Data frame handling\nI0915 11:48:28.087792 3111 log.go:181] (0xc00058e140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:32445/\nI0915 11:48:28.095146 3111 log.go:181] (0xc0003bee70) Data frame received for 3\nI0915 11:48:28.095162 3111 log.go:181] (0xc000642000) (3) Data frame handling\nI0915 11:48:28.095175 3111 log.go:181] (0xc000642000) (3) Data frame sent\nI0915 11:48:28.096271 3111 log.go:181] (0xc0003bee70) Data frame received for 3\nI0915 11:48:28.096320 3111 log.go:181] (0xc000642000) (3) Data frame handling\nI0915 11:48:28.096335 3111 log.go:181] (0xc000642000) (3) Data frame sent\nI0915 11:48:28.096368 3111 log.go:181] (0xc0003bee70) Data frame received for 5\nI0915 11:48:28.096389 3111 log.go:181] (0xc00058e140) (5) Data frame handling\nI0915 11:48:28.096413 3111 log.go:181] (0xc00058e140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:32445/\nI0915 11:48:28.099649 3111 log.go:181] (0xc0003bee70) Data frame received for 3\nI0915 11:48:28.099670 3111 log.go:181] (0xc000642000) (3) Data frame handling\nI0915 11:48:28.099682 3111 log.go:181] (0xc000642000) (3) Data frame sent\nI0915 11:48:28.100410 3111 log.go:181] (0xc0003bee70) Data frame received for 3\nI0915 11:48:28.100440 3111 log.go:181] (0xc000642000) (3) Data frame handling\nI0915 11:48:28.100458 3111 log.go:181] (0xc000642000) (3) Data frame sent\nI0915 11:48:28.100483 3111 log.go:181] (0xc0003bee70) Data frame received for 5\nI0915 11:48:28.100500 3111 log.go:181] (0xc00058e140) (5) Data frame handling\nI0915 11:48:28.100523 3111 log.go:181] (0xc00058e140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:32445/\nI0915 11:48:28.106053 3111 log.go:181] (0xc0003bee70) Data frame received for 3\nI0915 11:48:28.106076 3111 log.go:181] (0xc000642000) (3) Data frame handling\nI0915 
11:48:28.106087 3111 log.go:181] (0xc000642000) (3) Data frame sent\nI0915 11:48:28.106689 3111 log.go:181] (0xc0003bee70) Data frame received for 3\nI0915 11:48:28.106715 3111 log.go:181] (0xc0003bee70) Data frame received for 5\nI0915 11:48:28.106743 3111 log.go:181] (0xc00058e140) (5) Data frame handling\nI0915 11:48:28.106760 3111 log.go:181] (0xc00058e140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:32445/\nI0915 11:48:28.106782 3111 log.go:181] (0xc000642000) (3) Data frame handling\nI0915 11:48:28.106796 3111 log.go:181] (0xc000642000) (3) Data frame sent\nI0915 11:48:28.113946 3111 log.go:181] (0xc0003bee70) Data frame received for 3\nI0915 11:48:28.113965 3111 log.go:181] (0xc000642000) (3) Data frame handling\nI0915 11:48:28.113981 3111 log.go:181] (0xc000642000) (3) Data frame sent\nI0915 11:48:28.114496 3111 log.go:181] (0xc0003bee70) Data frame received for 5\nI0915 11:48:28.114533 3111 log.go:181] (0xc00058e140) (5) Data frame handling\nI0915 11:48:28.114560 3111 log.go:181] (0xc00058e140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:32445/\nI0915 11:48:28.114632 3111 log.go:181] (0xc0003bee70) Data frame received for 3\nI0915 11:48:28.114660 3111 log.go:181] (0xc000642000) (3) Data frame handling\nI0915 11:48:28.114711 3111 log.go:181] (0xc000642000) (3) Data frame sent\nI0915 11:48:28.119056 3111 log.go:181] (0xc0003bee70) Data frame received for 3\nI0915 11:48:28.119070 3111 log.go:181] (0xc000642000) (3) Data frame handling\nI0915 11:48:28.119079 3111 log.go:181] (0xc000642000) (3) Data frame sent\nI0915 11:48:28.119837 3111 log.go:181] (0xc0003bee70) Data frame received for 3\nI0915 11:48:28.119856 3111 log.go:181] (0xc000642000) (3) Data frame handling\nI0915 11:48:28.119875 3111 log.go:181] (0xc0003bee70) Data frame received for 5\nI0915 11:48:28.119899 3111 log.go:181] (0xc00058e140) (5) Data frame handling\nI0915 11:48:28.119918 3111 log.go:181] (0xc00058e140) (5) Data 
frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:32445/\nI0915 11:48:28.119944 3111 log.go:181] (0xc000642000) (3) Data frame sent\nI0915 11:48:28.123373 3111 log.go:181] (0xc0003bee70) Data frame received for 3\nI0915 11:48:28.123413 3111 log.go:181] (0xc000642000) (3) Data frame handling\nI0915 11:48:28.123447 3111 log.go:181] (0xc000642000) (3) Data frame sent\nI0915 11:48:28.123918 3111 log.go:181] (0xc0003bee70) Data frame received for 5\nI0915 11:48:28.123942 3111 log.go:181] (0xc00058e140) (5) Data frame handling\nI0915 11:48:28.123959 3111 log.go:181] (0xc00058e140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:32445/\nI0915 11:48:28.123984 3111 log.go:181] (0xc0003bee70) Data frame received for 3\nI0915 11:48:28.124012 3111 log.go:181] (0xc000642000) (3) Data frame handling\nI0915 11:48:28.124041 3111 log.go:181] (0xc000642000) (3) Data frame sent\nI0915 11:48:28.129673 3111 log.go:181] (0xc0003bee70) Data frame received for 3\nI0915 11:48:28.129707 3111 log.go:181] (0xc000642000) (3) Data frame handling\nI0915 11:48:28.129742 3111 log.go:181] (0xc000642000) (3) Data frame sent\nI0915 11:48:28.130379 3111 log.go:181] (0xc0003bee70) Data frame received for 5\nI0915 11:48:28.130401 3111 log.go:181] (0xc00058e140) (5) Data frame handling\nI0915 11:48:28.130412 3111 log.go:181] (0xc00058e140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:32445/\nI0915 11:48:28.130644 3111 log.go:181] (0xc0003bee70) Data frame received for 3\nI0915 11:48:28.130669 3111 log.go:181] (0xc000642000) (3) Data frame handling\nI0915 11:48:28.130686 3111 log.go:181] (0xc000642000) (3) Data frame sent\nI0915 11:48:28.134460 3111 log.go:181] (0xc0003bee70) Data frame received for 3\nI0915 11:48:28.134477 3111 log.go:181] (0xc000642000) (3) Data frame handling\nI0915 11:48:28.134487 3111 log.go:181] (0xc000642000) (3) Data frame sent\nI0915 11:48:28.135175 3111 log.go:181] (0xc0003bee70) Data 
frame received for 3\nI0915 11:48:28.135187 3111 log.go:181] (0xc000642000) (3) Data frame handling\nI0915 11:48:28.135193 3111 log.go:181] (0xc000642000) (3) Data frame sent\nI0915 11:48:28.135211 3111 log.go:181] (0xc0003bee70) Data frame received for 5\nI0915 11:48:28.135234 3111 log.go:181] (0xc00058e140) (5) Data frame handling\nI0915 11:48:28.135247 3111 log.go:181] (0xc00058e140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:32445/\nI0915 11:48:28.142638 3111 log.go:181] (0xc0003bee70) Data frame received for 3\nI0915 11:48:28.142661 3111 log.go:181] (0xc000642000) (3) Data frame handling\nI0915 11:48:28.142680 3111 log.go:181] (0xc000642000) (3) Data frame sent\nI0915 11:48:28.143500 3111 log.go:181] (0xc0003bee70) Data frame received for 5\nI0915 11:48:28.143519 3111 log.go:181] (0xc00058e140) (5) Data frame handling\nI0915 11:48:28.143532 3111 log.go:181] (0xc00058e140) (5) Data frame sent\nI0915 11:48:28.143543 3111 log.go:181] (0xc0003bee70) Data frame received for 5\nI0915 11:48:28.143552 3111 log.go:181] (0xc00058e140) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:32445/\nI0915 11:48:28.143573 3111 log.go:181] (0xc00058e140) (5) Data frame sent\nI0915 11:48:28.143613 3111 log.go:181] (0xc0003bee70) Data frame received for 3\nI0915 11:48:28.143669 3111 log.go:181] (0xc000642000) (3) Data frame handling\nI0915 11:48:28.143681 3111 log.go:181] (0xc000642000) (3) Data frame sent\nI0915 11:48:28.148647 3111 log.go:181] (0xc0003bee70) Data frame received for 3\nI0915 11:48:28.148690 3111 log.go:181] (0xc000642000) (3) Data frame handling\nI0915 11:48:28.148726 3111 log.go:181] (0xc000642000) (3) Data frame sent\nI0915 11:48:28.149230 3111 log.go:181] (0xc0003bee70) Data frame received for 5\nI0915 11:48:28.149256 3111 log.go:181] (0xc0003bee70) Data frame received for 3\nI0915 11:48:28.149284 3111 log.go:181] (0xc000642000) (3) Data frame handling\nI0915 11:48:28.149298 3111 
log.go:181] (0xc000642000) (3) Data frame sent\nI0915 11:48:28.149314 3111 log.go:181] (0xc00058e140) (5) Data frame handling\nI0915 11:48:28.149326 3111 log.go:181] (0xc00058e140) (5) Data frame sent\nI0915 11:48:28.149338 3111 log.go:181] (0xc0003bee70) Data frame received for 5\nI0915 11:48:28.149348 3111 log.go:181] (0xc00058e140) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:32445/\nI0915 11:48:28.149392 3111 log.go:181] (0xc00058e140) (5) Data frame sent\nI0915 11:48:28.156313 3111 log.go:181] (0xc0003bee70) Data frame received for 3\nI0915 11:48:28.156331 3111 log.go:181] (0xc000642000) (3) Data frame handling\nI0915 11:48:28.156345 3111 log.go:181] (0xc000642000) (3) Data frame sent\nI0915 11:48:28.156980 3111 log.go:181] (0xc0003bee70) Data frame received for 3\nI0915 11:48:28.157003 3111 log.go:181] (0xc000642000) (3) Data frame handling\nI0915 11:48:28.157012 3111 log.go:181] (0xc000642000) (3) Data frame sent\nI0915 11:48:28.157024 3111 log.go:181] (0xc0003bee70) Data frame received for 5\nI0915 11:48:28.157031 3111 log.go:181] (0xc00058e140) (5) Data frame handling\nI0915 11:48:28.157037 3111 log.go:181] (0xc00058e140) (5) Data frame sent\nI0915 11:48:28.157044 3111 log.go:181] (0xc0003bee70) Data frame received for 5\nI0915 11:48:28.157052 3111 log.go:181] (0xc00058e140) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:32445/\nI0915 11:48:28.157074 3111 log.go:181] (0xc00058e140) (5) Data frame sent\nI0915 11:48:28.162282 3111 log.go:181] (0xc0003bee70) Data frame received for 3\nI0915 11:48:28.162306 3111 log.go:181] (0xc000642000) (3) Data frame handling\nI0915 11:48:28.162325 3111 log.go:181] (0xc000642000) (3) Data frame sent\nI0915 11:48:28.162872 3111 log.go:181] (0xc0003bee70) Data frame received for 5\nI0915 11:48:28.162902 3111 log.go:181] (0xc00058e140) (5) Data frame handling\nI0915 11:48:28.162921 3111 log.go:181] (0xc00058e140) (5) Data frame sent\nI0915 
11:48:28.162936 3111 log.go:181] (0xc0003bee70) Data frame received for 5\nI0915 11:48:28.162956 3111 log.go:181] (0xc00058e140) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:32445/\nI0915 11:48:28.162981 3111 log.go:181] (0xc00058e140) (5) Data frame sent\nI0915 11:48:28.163006 3111 log.go:181] (0xc0003bee70) Data frame received for 3\nI0915 11:48:28.163021 3111 log.go:181] (0xc000642000) (3) Data frame handling\nI0915 11:48:28.163036 3111 log.go:181] (0xc000642000) (3) Data frame sent\nI0915 11:48:28.170222 3111 log.go:181] (0xc0003bee70) Data frame received for 3\nI0915 11:48:28.170245 3111 log.go:181] (0xc000642000) (3) Data frame handling\nI0915 11:48:28.170268 3111 log.go:181] (0xc000642000) (3) Data frame sent\nI0915 11:48:28.171105 3111 log.go:181] (0xc0003bee70) Data frame received for 5\nI0915 11:48:28.171138 3111 log.go:181] (0xc00058e140) (5) Data frame handling\nI0915 11:48:28.171171 3111 log.go:181] (0xc00058e140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:32445/\nI0915 11:48:28.171200 3111 log.go:181] (0xc0003bee70) Data frame received for 3\nI0915 11:48:28.171220 3111 log.go:181] (0xc000642000) (3) Data frame handling\nI0915 11:48:28.171246 3111 log.go:181] (0xc000642000) (3) Data frame sent\nI0915 11:48:28.178029 3111 log.go:181] (0xc0003bee70) Data frame received for 3\nI0915 11:48:28.178057 3111 log.go:181] (0xc000642000) (3) Data frame handling\nI0915 11:48:28.178093 3111 log.go:181] (0xc000642000) (3) Data frame sent\nI0915 11:48:28.179042 3111 log.go:181] (0xc0003bee70) Data frame received for 3\nI0915 11:48:28.179073 3111 log.go:181] (0xc000642000) (3) Data frame handling\nI0915 11:48:28.179185 3111 log.go:181] (0xc0003bee70) Data frame received for 5\nI0915 11:48:28.179207 3111 log.go:181] (0xc00058e140) (5) Data frame handling\nI0915 11:48:28.180820 3111 log.go:181] (0xc0003bee70) Data frame received for 1\nI0915 11:48:28.180854 3111 log.go:181] (0xc000530140) (1) 
Data frame handling\nI0915 11:48:28.180876 3111 log.go:181] (0xc000530140) (1) Data frame sent\nI0915 11:48:28.180899 3111 log.go:181] (0xc0003bee70) (0xc000530140) Stream removed, broadcasting: 1\nI0915 11:48:28.180923 3111 log.go:181] (0xc0003bee70) Go away received\nI0915 11:48:28.181310 3111 log.go:181] (0xc0003bee70) (0xc000530140) Stream removed, broadcasting: 1\nI0915 11:48:28.181333 3111 log.go:181] (0xc0003bee70) (0xc000642000) Stream removed, broadcasting: 3\nI0915 11:48:28.181344 3111 log.go:181] (0xc0003bee70) (0xc00058e140) Stream removed, broadcasting: 5\n" Sep 15 11:48:28.186: INFO: stdout: "\naffinity-nodeport-transition-dvdf2\naffinity-nodeport-transition-dvdf2\naffinity-nodeport-transition-dvdf2\naffinity-nodeport-transition-dvdf2\naffinity-nodeport-transition-dvdf2\naffinity-nodeport-transition-dvdf2\naffinity-nodeport-transition-dvdf2\naffinity-nodeport-transition-dvdf2\naffinity-nodeport-transition-dvdf2\naffinity-nodeport-transition-dvdf2\naffinity-nodeport-transition-dvdf2\naffinity-nodeport-transition-dvdf2\naffinity-nodeport-transition-dvdf2\naffinity-nodeport-transition-dvdf2\naffinity-nodeport-transition-dvdf2\naffinity-nodeport-transition-dvdf2" Sep 15 11:48:28.186: INFO: Received response from host: affinity-nodeport-transition-dvdf2 Sep 15 11:48:28.186: INFO: Received response from host: affinity-nodeport-transition-dvdf2 Sep 15 11:48:28.186: INFO: Received response from host: affinity-nodeport-transition-dvdf2 Sep 15 11:48:28.186: INFO: Received response from host: affinity-nodeport-transition-dvdf2 Sep 15 11:48:28.186: INFO: Received response from host: affinity-nodeport-transition-dvdf2 Sep 15 11:48:28.186: INFO: Received response from host: affinity-nodeport-transition-dvdf2 Sep 15 11:48:28.186: INFO: Received response from host: affinity-nodeport-transition-dvdf2 Sep 15 11:48:28.186: INFO: Received response from host: affinity-nodeport-transition-dvdf2 Sep 15 11:48:28.186: INFO: Received response from host: 
affinity-nodeport-transition-dvdf2 Sep 15 11:48:28.186: INFO: Received response from host: affinity-nodeport-transition-dvdf2 Sep 15 11:48:28.186: INFO: Received response from host: affinity-nodeport-transition-dvdf2 Sep 15 11:48:28.186: INFO: Received response from host: affinity-nodeport-transition-dvdf2 Sep 15 11:48:28.186: INFO: Received response from host: affinity-nodeport-transition-dvdf2 Sep 15 11:48:28.186: INFO: Received response from host: affinity-nodeport-transition-dvdf2 Sep 15 11:48:28.186: INFO: Received response from host: affinity-nodeport-transition-dvdf2 Sep 15 11:48:28.186: INFO: Received response from host: affinity-nodeport-transition-dvdf2 Sep 15 11:48:28.186: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-2266, will wait for the garbage collector to delete the pods Sep 15 11:48:28.286: INFO: Deleting ReplicationController affinity-nodeport-transition took: 16.281591ms Sep 15 11:48:28.686: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 400.23581ms [AfterEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:48:43.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2266" for this suite. 
[AfterEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:31.169 seconds] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":303,"completed":246,"skipped":4162,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:48:43.381: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-5451, will wait for the garbage collector to delete the pods Sep 15 11:48:49.496: INFO: Deleting Job.batch foo took: 6.931873ms Sep 15 11:48:49.597: INFO: Terminating 
Job.batch foo pods took: 100.299785ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:49:33.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-5451" for this suite. • [SLOW TEST:49.940 seconds] [sig-apps] Job /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":303,"completed":247,"skipped":4204,"failed":0} SSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:49:33.322: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Sep 15 11:49:33.365: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Sep 15 
11:49:33.372: INFO: Waiting for terminating namespaces to be deleted... Sep 15 11:49:33.374: INFO: Logging pods the apiserver thinks is on node kali-worker before test Sep 15 11:49:33.378: INFO: kindnet-jk7qk from kube-system started at 2020-09-13 16:57:34 +0000 UTC (1 container statuses recorded) Sep 15 11:49:33.378: INFO: Container kindnet-cni ready: true, restart count 0 Sep 15 11:49:33.378: INFO: kube-proxy-kz8hk from kube-system started at 2020-09-13 16:57:34 +0000 UTC (1 container statuses recorded) Sep 15 11:49:33.378: INFO: Container kube-proxy ready: true, restart count 0 Sep 15 11:49:33.378: INFO: Logging pods the apiserver thinks is on node kali-worker2 before test Sep 15 11:49:33.382: INFO: kindnet-r64bh from kube-system started at 2020-09-13 16:57:34 +0000 UTC (1 container statuses recorded) Sep 15 11:49:33.382: INFO: Container kindnet-cni ready: true, restart count 0 Sep 15 11:49:33.382: INFO: kube-proxy-rnv9w from kube-system started at 2020-09-13 16:57:34 +0000 UTC (1 container statuses recorded) Sep 15 11:49:33.382: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
STEP: verifying the node has the label kubernetes.io/e2e-6ee06e82-3ad5-4407-9d16-4dd7e76ef7df 90
STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled
STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled
STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides
STEP: removing the label kubernetes.io/e2e-6ee06e82-3ad5-4407-9d16-4dd7e76ef7df off the node kali-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-6ee06e82-3ad5-4407-9d16-4dd7e76ef7df
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 15 11:49:51.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-6272" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
• [SLOW TEST:18.298 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":303,"completed":248,"skipped":4208,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Job
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 15 11:49:51.620: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching orphans and release non-matching pods [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: Orphaning one of the Job's Pods
Sep 15 11:49:56.188: INFO: Successfully updated pod "adopt-release-dm4fl"
STEP: Checking that the Job readopts the Pod
Sep 15 11:49:56.188: INFO: Waiting up to 15m0s for pod "adopt-release-dm4fl" in namespace "job-1436" to be "adopted"
Sep 15 11:49:56.212: INFO: Pod "adopt-release-dm4fl": Phase="Running", Reason="", readiness=true. Elapsed: 24.197091ms
Sep 15 11:49:58.215: INFO: Pod "adopt-release-dm4fl": Phase="Running", Reason="", readiness=true. Elapsed: 2.026362411s
Sep 15 11:49:58.215: INFO: Pod "adopt-release-dm4fl" satisfied condition "adopted"
STEP: Removing the labels from the Job's Pod
Sep 15 11:49:58.782: INFO: Successfully updated pod "adopt-release-dm4fl"
STEP: Checking that the Job releases the Pod
Sep 15 11:49:58.782: INFO: Waiting up to 15m0s for pod "adopt-release-dm4fl" in namespace "job-1436" to be "released"
Sep 15 11:49:58.792: INFO: Pod "adopt-release-dm4fl": Phase="Running", Reason="", readiness=true. Elapsed: 9.922571ms
Sep 15 11:50:00.794: INFO: Pod "adopt-release-dm4fl": Phase="Running", Reason="", readiness=true. Elapsed: 2.012814412s
Sep 15 11:50:00.794: INFO: Pod "adopt-release-dm4fl" satisfied condition "released"
[AfterEach] [sig-apps] Job
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 15 11:50:00.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-1436" for this suite.
• [SLOW TEST:9.181 seconds]
[sig-apps] Job
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should adopt matching orphans and release non-matching pods [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":303,"completed":249,"skipped":4222,"failed":0}
[sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 15 11:50:00.801: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should update annotations on modification [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating the pod
Sep 15 11:50:05.884: INFO: Successfully updated pod "annotationupdate3ee57346-a48b-4d61-9cf9-02502902e372"
[AfterEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 15 11:50:09.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7905" for this suite.
• [SLOW TEST:9.133 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
should update annotations on modification [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":303,"completed":250,"skipped":4222,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 15 11:50:09.936: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir volume type on tmpfs
Sep 15 11:50:09.998: INFO: Waiting up to 5m0s for pod "pod-000b4f6c-0948-40ee-9476-0644b12291c2" in namespace "emptydir-7059" to be "Succeeded or Failed"
Sep 15 11:50:10.004: INFO: Pod "pod-000b4f6c-0948-40ee-9476-0644b12291c2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052801ms
Sep 15 11:50:12.093: INFO: Pod "pod-000b4f6c-0948-40ee-9476-0644b12291c2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094516206s
Sep 15 11:50:14.271: INFO: Pod "pod-000b4f6c-0948-40ee-9476-0644b12291c2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.272865431s
STEP: Saw pod success
Sep 15 11:50:14.271: INFO: Pod "pod-000b4f6c-0948-40ee-9476-0644b12291c2" satisfied condition "Succeeded or Failed"
Sep 15 11:50:14.274: INFO: Trying to get logs from node kali-worker2 pod pod-000b4f6c-0948-40ee-9476-0644b12291c2 container test-container:
STEP: delete the pod
Sep 15 11:50:14.415: INFO: Waiting for pod pod-000b4f6c-0948-40ee-9476-0644b12291c2 to disappear
Sep 15 11:50:14.420: INFO: Pod pod-000b4f6c-0948-40ee-9476-0644b12291c2 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 15 11:50:14.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7059" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":251,"skipped":4239,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 15 11:50:14.427: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] removes definition from spec when one version gets changed to not be served [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: set up a multi version CRD
Sep 15 11:50:14.515: INFO: >>> kubeConfig: /root/.kube/config
STEP: mark a version not served
STEP: check the unserved version gets removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 15 11:50:28.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3783" for this suite.
• [SLOW TEST:13.808 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
removes definition from spec when one version gets changed to not be served [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":303,"completed":252,"skipped":4251,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 15 11:50:28.236: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256
[It] should check if v1 is in available api versions [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: validating api versions
Sep 15 11:50:28.331: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config api-versions'
Sep 15 11:50:28.573: INFO: stderr: ""
Sep 15 11:50:28.573: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 15 11:50:28.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8331" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":303,"completed":253,"skipped":4282,"failed":0}
SSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 15 11:50:28.582: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-downwardapi-zfjw
STEP: Creating a pod to test atomic-volume-subpath
Sep 15 11:50:28.671: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-zfjw" in namespace "subpath-1815" to be "Succeeded or Failed"
Sep 15 11:50:28.691: INFO: Pod "pod-subpath-test-downwardapi-zfjw": Phase="Pending", Reason="", readiness=false. Elapsed: 19.681201ms
Sep 15 11:50:30.735: INFO: Pod "pod-subpath-test-downwardapi-zfjw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064217462s
Sep 15 11:50:32.740: INFO: Pod "pod-subpath-test-downwardapi-zfjw": Phase="Running", Reason="", readiness=true. Elapsed: 4.068880659s
Sep 15 11:50:34.745: INFO: Pod "pod-subpath-test-downwardapi-zfjw": Phase="Running", Reason="", readiness=true. Elapsed: 6.073661546s
Sep 15 11:50:36.750: INFO: Pod "pod-subpath-test-downwardapi-zfjw": Phase="Running", Reason="", readiness=true. Elapsed: 8.078801216s
Sep 15 11:50:38.754: INFO: Pod "pod-subpath-test-downwardapi-zfjw": Phase="Running", Reason="", readiness=true. Elapsed: 10.083417682s
Sep 15 11:50:40.759: INFO: Pod "pod-subpath-test-downwardapi-zfjw": Phase="Running", Reason="", readiness=true. Elapsed: 12.087893393s
Sep 15 11:50:42.764: INFO: Pod "pod-subpath-test-downwardapi-zfjw": Phase="Running", Reason="", readiness=true. Elapsed: 14.09281409s
Sep 15 11:50:44.769: INFO: Pod "pod-subpath-test-downwardapi-zfjw": Phase="Running", Reason="", readiness=true. Elapsed: 16.097759545s
Sep 15 11:50:46.774: INFO: Pod "pod-subpath-test-downwardapi-zfjw": Phase="Running", Reason="", readiness=true. Elapsed: 18.102505749s
Sep 15 11:50:48.779: INFO: Pod "pod-subpath-test-downwardapi-zfjw": Phase="Running", Reason="", readiness=true. Elapsed: 20.1074749s
Sep 15 11:50:50.784: INFO: Pod "pod-subpath-test-downwardapi-zfjw": Phase="Running", Reason="", readiness=true. Elapsed: 22.113086343s
Sep 15 11:50:52.790: INFO: Pod "pod-subpath-test-downwardapi-zfjw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.118548824s
STEP: Saw pod success
Sep 15 11:50:52.790: INFO: Pod "pod-subpath-test-downwardapi-zfjw" satisfied condition "Succeeded or Failed"
Sep 15 11:50:52.792: INFO: Trying to get logs from node kali-worker2 pod pod-subpath-test-downwardapi-zfjw container test-container-subpath-downwardapi-zfjw:
STEP: delete the pod
Sep 15 11:50:52.831: INFO: Waiting for pod pod-subpath-test-downwardapi-zfjw to disappear
Sep 15 11:50:52.843: INFO: Pod pod-subpath-test-downwardapi-zfjw no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-zfjw
Sep 15 11:50:52.843: INFO: Deleting pod "pod-subpath-test-downwardapi-zfjw" in namespace "subpath-1815"
[AfterEach] [sig-storage] Subpath
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 15 11:50:52.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-1815" for this suite.
• [SLOW TEST:24.294 seconds]
[sig-storage] Subpath
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
Atomic writer volumes
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
should support subpaths with downward pod [LinuxOnly] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":303,"completed":254,"skipped":4285,"failed":0}
SSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 15 11:50:52.876: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-map-6a544765-7924-4570-bdf0-68ae9eeeda17
STEP: Creating a pod to test consume configMaps
Sep 15 11:50:52.996: INFO: Waiting up to 5m0s for pod "pod-configmaps-564d5e3a-fe25-4953-be2d-dccc18d3e768" in namespace "configmap-7248" to be "Succeeded or Failed"
Sep 15 11:50:53.002: INFO: Pod "pod-configmaps-564d5e3a-fe25-4953-be2d-dccc18d3e768": Phase="Pending", Reason="", readiness=false. Elapsed: 5.441549ms
Sep 15 11:50:55.005: INFO: Pod "pod-configmaps-564d5e3a-fe25-4953-be2d-dccc18d3e768": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009112935s
Sep 15 11:50:57.010: INFO: Pod "pod-configmaps-564d5e3a-fe25-4953-be2d-dccc18d3e768": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013283247s
STEP: Saw pod success
Sep 15 11:50:57.010: INFO: Pod "pod-configmaps-564d5e3a-fe25-4953-be2d-dccc18d3e768" satisfied condition "Succeeded or Failed"
Sep 15 11:50:57.012: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-564d5e3a-fe25-4953-be2d-dccc18d3e768 container configmap-volume-test:
STEP: delete the pod
Sep 15 11:50:57.133: INFO: Waiting for pod pod-configmaps-564d5e3a-fe25-4953-be2d-dccc18d3e768 to disappear
Sep 15 11:50:57.266: INFO: Pod pod-configmaps-564d5e3a-fe25-4953-be2d-dccc18d3e768 no longer exists
[AfterEach] [sig-storage] ConfigMap
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 15 11:50:57.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7248" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":255,"skipped":4289,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 15 11:50:57.281: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod busybox-778cffa6-567e-4503-9804-a6544b5a99a7 in namespace container-probe-4138
Sep 15 11:51:01.367: INFO: Started pod busybox-778cffa6-567e-4503-9804-a6544b5a99a7 in namespace container-probe-4138
STEP: checking the pod's current state and verifying that restartCount is present
Sep 15 11:51:01.374: INFO: Initial restart count of pod busybox-778cffa6-567e-4503-9804-a6544b5a99a7 is 0
Sep 15 11:51:49.492: INFO: Restart count of pod container-probe-4138/busybox-778cffa6-567e-4503-9804-a6544b5a99a7 is now 1 (48.117658757s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 15 11:51:49.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4138" for this suite.
• [SLOW TEST:52.262 seconds]
[k8s.io] Probing container
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":303,"completed":256,"skipped":4304,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 15 11:51:49.544: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256
[BeforeEach] Kubectl logs
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1415
STEP: creating a pod
Sep 15 11:51:49.651: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config run logs-generator --image=k8s.gcr.io/e2e-test-images/agnhost:2.20 --namespace=kubectl-4523 --restart=Never -- logs-generator --log-lines-total 100 --run-duration 20s'
Sep 15 11:51:49.771: INFO: stderr: ""
Sep 15 11:51:49.771: INFO: stdout: "pod/logs-generator created\n"
[It] should be able to retrieve and filter logs [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Waiting for log generator to start.
Sep 15 11:51:49.771: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator]
Sep 15 11:51:49.771: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-4523" to be "running and ready, or succeeded"
Sep 15 11:51:49.791: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 20.026466ms
Sep 15 11:51:51.796: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025190852s
Sep 15 11:51:53.802: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.03093372s
Sep 15 11:51:53.802: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded"
Sep 15 11:51:53.802: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true.
Pods: [logs-generator]
STEP: checking for matching strings
Sep 15 11:51:53.802: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4523'
Sep 15 11:51:53.942: INFO: stderr: ""
Sep 15 11:51:53.942: INFO: stdout: "I0915 11:51:52.271737       1 logs_generator.go:76] 0 GET /api/v1/namespaces/kube-system/pods/4mm4 433\nI0915 11:51:52.471879       1 logs_generator.go:76] 1 GET /api/v1/namespaces/ns/pods/brd 254\nI0915 11:51:52.671932       1 logs_generator.go:76] 2 POST /api/v1/namespaces/default/pods/g24 346\nI0915 11:51:52.871893       1 logs_generator.go:76] 3 PUT /api/v1/namespaces/kube-system/pods/mg47 512\nI0915 11:51:53.071875       1 logs_generator.go:76] 4 PUT /api/v1/namespaces/default/pods/ktcm 267\nI0915 11:51:53.271903       1 logs_generator.go:76] 5 POST /api/v1/namespaces/ns/pods/6ws 387\nI0915 11:51:53.471917       1 logs_generator.go:76] 6 GET /api/v1/namespaces/ns/pods/p7lf 506\nI0915 11:51:53.671851       1 logs_generator.go:76] 7 POST /api/v1/namespaces/kube-system/pods/htj6 552\nI0915 11:51:53.871849       1 logs_generator.go:76] 8 GET /api/v1/namespaces/kube-system/pods/7qx 458\n"
STEP: limiting log lines
Sep 15 11:51:53.942: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4523 --tail=1'
Sep 15 11:51:54.058: INFO: stderr: ""
Sep 15 11:51:54.058: INFO: stdout: "I0915 11:51:53.871849       1 logs_generator.go:76] 8 GET /api/v1/namespaces/kube-system/pods/7qx 458\n"
Sep 15 11:51:54.058: INFO: got output "I0915 11:51:53.871849       1 logs_generator.go:76] 8 GET /api/v1/namespaces/kube-system/pods/7qx 458\n"
STEP: limiting log bytes
Sep 15 11:51:54.058: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4523 --limit-bytes=1'
Sep 15 11:51:54.171: INFO: stderr: ""
Sep 15 11:51:54.171: INFO: stdout: "I"
Sep 15 11:51:54.171: INFO: got output "I"
STEP: exposing timestamps
Sep 15 11:51:54.171: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4523 --tail=1 --timestamps'
Sep 15 11:51:54.277: INFO: stderr: ""
Sep 15 11:51:54.277: INFO: stdout: "2020-09-15T11:51:54.272048714Z I0915 11:51:54.271868       1 logs_generator.go:76] 10 PUT /api/v1/namespaces/default/pods/sxn 370\n"
Sep 15 11:51:54.277: INFO: got output "2020-09-15T11:51:54.272048714Z I0915 11:51:54.271868       1 logs_generator.go:76] 10 PUT /api/v1/namespaces/default/pods/sxn 370\n"
STEP: restricting to a time range
Sep 15 11:51:56.777: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4523 --since=1s'
Sep 15 11:51:56.905: INFO: stderr: ""
Sep 15 11:51:56.905: INFO: stdout: "I0915 11:51:56.071934       1 logs_generator.go:76] 19 POST /api/v1/namespaces/ns/pods/bjh5 464\nI0915 11:51:56.271900       1 logs_generator.go:76] 20 GET /api/v1/namespaces/kube-system/pods/qlv 329\nI0915 11:51:56.471892       1 logs_generator.go:76] 21 GET /api/v1/namespaces/ns/pods/8sp5 201\nI0915 11:51:56.671913       1 logs_generator.go:76] 22 GET /api/v1/namespaces/ns/pods/mmnv 430\nI0915 11:51:56.871890       1 logs_generator.go:76] 23 POST /api/v1/namespaces/ns/pods/cngq 462\n"
Sep 15 11:51:56.905: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4523 --since=24h'
Sep 15 11:51:57.006: INFO: stderr: ""
Sep 15 11:51:57.006: INFO: stdout: "I0915 11:51:52.271737       1 logs_generator.go:76] 0 GET /api/v1/namespaces/kube-system/pods/4mm4 433\nI0915 11:51:52.471879       1 logs_generator.go:76] 1 GET /api/v1/namespaces/ns/pods/brd 254\nI0915 11:51:52.671932       1 logs_generator.go:76] 2 POST /api/v1/namespaces/default/pods/g24 346\nI0915 11:51:52.871893       1 logs_generator.go:76] 3 PUT /api/v1/namespaces/kube-system/pods/mg47 512\nI0915 11:51:53.071875       1 logs_generator.go:76] 4 PUT /api/v1/namespaces/default/pods/ktcm 267\nI0915 11:51:53.271903       1 logs_generator.go:76] 5 POST /api/v1/namespaces/ns/pods/6ws 387\nI0915 11:51:53.471917       1 logs_generator.go:76] 6 GET /api/v1/namespaces/ns/pods/p7lf 506\nI0915 11:51:53.671851       1 logs_generator.go:76] 7 POST /api/v1/namespaces/kube-system/pods/htj6 552\nI0915 11:51:53.871849       1 logs_generator.go:76] 8 GET /api/v1/namespaces/kube-system/pods/7qx 458\nI0915 11:51:54.071911       1 logs_generator.go:76] 9 POST /api/v1/namespaces/default/pods/k24 409\nI0915 11:51:54.271868       1 logs_generator.go:76] 10 PUT /api/v1/namespaces/default/pods/sxn 370\nI0915 11:51:54.471882       1 logs_generator.go:76] 11 PUT /api/v1/namespaces/kube-system/pods/wq5 318\nI0915 11:51:54.671922       1 logs_generator.go:76] 12 PUT /api/v1/namespaces/default/pods/jcpq 389\nI0915 11:51:54.871881       1 logs_generator.go:76] 13 POST /api/v1/namespaces/default/pods/5tkr 538\nI0915 11:51:55.071893       1 logs_generator.go:76] 14 GET /api/v1/namespaces/kube-system/pods/frrh 321\nI0915 11:51:55.271912       1 logs_generator.go:76] 15 POST /api/v1/namespaces/kube-system/pods/m2z6 560\nI0915 11:51:55.471906       1 logs_generator.go:76] 16 POST /api/v1/namespaces/default/pods/bfr 580\nI0915 11:51:55.671916       1 logs_generator.go:76] 17 PUT /api/v1/namespaces/ns/pods/69gk 268\nI0915 11:51:55.871947       1 logs_generator.go:76] 18 POST /api/v1/namespaces/kube-system/pods/xfz 229\nI0915 11:51:56.071934       1 logs_generator.go:76] 19 POST /api/v1/namespaces/ns/pods/bjh5 464\nI0915 11:51:56.271900       1 logs_generator.go:76] 20 GET /api/v1/namespaces/kube-system/pods/qlv 329\nI0915 11:51:56.471892       1 logs_generator.go:76] 21 GET /api/v1/namespaces/ns/pods/8sp5 201\nI0915 11:51:56.671913       1 logs_generator.go:76] 22 GET /api/v1/namespaces/ns/pods/mmnv 430\nI0915 11:51:56.871890       1 logs_generator.go:76] 23 POST /api/v1/namespaces/ns/pods/cngq 462\n"
[AfterEach]
Kubectl logs /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1421 Sep 15 11:51:57.006: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-4523' Sep 15 11:51:59.885: INFO: stderr: "" Sep 15 11:51:59.885: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:51:59.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4523" for this suite. • [SLOW TEST:10.347 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1411 should be able to retrieve and filter logs [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":303,"completed":257,"skipped":4319,"failed":0} SSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:51:59.891: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-aba8fb08-00d1-4bb8-a8e1-e13264d99380 STEP: Creating a pod to test consume secrets Sep 15 11:52:00.009: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-cab9f3b7-f6e8-4ba1-be5f-093db76e4422" in namespace "projected-3402" to be "Succeeded or Failed" Sep 15 11:52:00.048: INFO: Pod "pod-projected-secrets-cab9f3b7-f6e8-4ba1-be5f-093db76e4422": Phase="Pending", Reason="", readiness=false. Elapsed: 38.867118ms Sep 15 11:52:02.052: INFO: Pod "pod-projected-secrets-cab9f3b7-f6e8-4ba1-be5f-093db76e4422": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043236036s Sep 15 11:52:04.056: INFO: Pod "pod-projected-secrets-cab9f3b7-f6e8-4ba1-be5f-093db76e4422": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.047249316s STEP: Saw pod success Sep 15 11:52:04.056: INFO: Pod "pod-projected-secrets-cab9f3b7-f6e8-4ba1-be5f-093db76e4422" satisfied condition "Succeeded or Failed" Sep 15 11:52:04.059: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-cab9f3b7-f6e8-4ba1-be5f-093db76e4422 container projected-secret-volume-test: STEP: delete the pod Sep 15 11:52:04.583: INFO: Waiting for pod pod-projected-secrets-cab9f3b7-f6e8-4ba1-be5f-093db76e4422 to disappear Sep 15 11:52:04.629: INFO: Pod pod-projected-secrets-cab9f3b7-f6e8-4ba1-be5f-093db76e4422 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:52:04.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3402" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":303,"completed":258,"skipped":4324,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:52:04.685: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 15 11:52:05.961: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 15 11:52:07.973: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735767525, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735767525, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735767526, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735767525, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 15 11:52:10.113: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735767525, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735767525, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735767526, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735767525, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 15 11:52:12.021: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735767525, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735767525, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735767526, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735767525, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 15 11:52:15.093: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 15 11:52:15.097: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-5755-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that 
should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:52:16.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5099" for this suite. STEP: Destroying namespace "webhook-5099-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:11.769 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":303,"completed":259,"skipped":4339,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:52:16.455: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: 
Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name s-test-opt-del-4d3c7eda-b7c8-494b-93f4-dfa19bba054d STEP: Creating secret with name s-test-opt-upd-8c221774-87bc-4680-addc-be7922d1e5b2 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-4d3c7eda-b7c8-494b-93f4-dfa19bba054d STEP: Updating secret s-test-opt-upd-8c221774-87bc-4680-addc-be7922d1e5b2 STEP: Creating secret with name s-test-opt-create-c2106615-31c9-4fa3-93cd-553c42d2c41c STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:52:31.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4623" for this suite. 
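Editor's note: the log-filtering behavior exercised by the "Kubectl logs" conformance test earlier in this run (`--tail`, `--limit-bytes`, `--timestamps`, `--since`) can be reproduced by hand against any running pod. A minimal sketch follows; the pod and namespace names are placeholders taken from this run and assume a live cluster with a matching kubeconfig:

```shell
# Placeholders: substitute a running pod and its namespace.
POD=logs-generator
NS=kubectl-4523

kubectl logs "$POD" -n "$NS"                        # full log stream
kubectl logs "$POD" -n "$NS" --tail=1               # only the last line
kubectl logs "$POD" -n "$NS" --limit-bytes=1        # only the first byte
kubectl logs "$POD" -n "$NS" --tail=1 --timestamps  # prefix each line with an RFC3339 timestamp
kubectl logs "$POD" -n "$NS" --since=1s             # only lines emitted in the last second
kubectl logs "$POD" -n "$NS" --since=24h            # lines from the last 24 hours
```

These are the same invocations the test driver runs above (with `--server` and `--kubeconfig` made explicit); the test then asserts on the returned stdout, e.g. that `--limit-bytes=1` yields exactly `"I"`.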
• [SLOW TEST:14.855 seconds] [sig-storage] Projected secret /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":260,"skipped":4353,"failed":0} SS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:52:31.310: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should serve multiport endpoints from pods [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service multi-endpoint-test in namespace services-3904 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3904 to expose endpoints map[] Sep 15 11:52:32.114: INFO: successfully validated that service multi-endpoint-test in 
namespace services-3904 exposes endpoints map[] STEP: Creating pod pod1 in namespace services-3904 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3904 to expose endpoints map[pod1:[100]] Sep 15 11:52:36.399: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]], will retry Sep 15 11:52:38.367: INFO: successfully validated that service multi-endpoint-test in namespace services-3904 exposes endpoints map[pod1:[100]] STEP: Creating pod pod2 in namespace services-3904 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3904 to expose endpoints map[pod1:[100] pod2:[101]] Sep 15 11:52:43.751: INFO: Unexpected endpoints: found map[911a80be-dac0-4c03-be2d-f4c54a072a92:[100]], expected map[pod1:[100] pod2:[101]], will retry Sep 15 11:52:48.809: INFO: Unexpected endpoints: found map[911a80be-dac0-4c03-be2d-f4c54a072a92:[100]], expected map[pod1:[100] pod2:[101]], will retry Sep 15 11:52:50.791: INFO: successfully validated that service multi-endpoint-test in namespace services-3904 exposes endpoints map[pod1:[100] pod2:[101]] STEP: Deleting pod pod1 in namespace services-3904 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3904 to expose endpoints map[pod2:[101]] Sep 15 11:52:50.827: INFO: successfully validated that service multi-endpoint-test in namespace services-3904 exposes endpoints map[pod2:[101]] STEP: Deleting pod pod2 in namespace services-3904 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3904 to expose endpoints map[] Sep 15 11:52:52.333: INFO: successfully validated that service multi-endpoint-test in namespace services-3904 exposes endpoints map[] [AfterEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:52:52.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "services-3904" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:21.253 seconds] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":303,"completed":261,"skipped":4355,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:52:52.563: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-4065 STEP: creating a selector STEP: Creating the service pods in kubernetes Sep 15 11:52:52.611: 
INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Sep 15 11:52:53.159: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Sep 15 11:52:55.162: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Sep 15 11:52:57.162: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Sep 15 11:52:59.195: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 15 11:53:01.161: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 15 11:53:03.177: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 15 11:53:05.162: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 15 11:53:07.162: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 15 11:53:09.162: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 15 11:53:11.162: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 15 11:53:13.163: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 15 11:53:15.162: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 15 11:53:17.162: INFO: The status of Pod netserver-0 is Running (Ready = true) Sep 15 11:53:17.167: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Sep 15 11:53:25.245: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.210 8081 | grep -v '^\s*$'] Namespace:pod-network-test-4065 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 15 11:53:25.245: INFO: >>> kubeConfig: /root/.kube/config I0915 11:53:25.270406 7 log.go:181] (0xc00041b080) (0xc00111edc0) Create stream I0915 11:53:25.270427 7 log.go:181] (0xc00041b080) (0xc00111edc0) Stream added, broadcasting: 1 I0915 11:53:25.271538 7 log.go:181] (0xc00041b080) Reply frame received for 1 I0915 11:53:25.271563 7 
log.go:181] (0xc00041b080) (0xc00111ef00) Create stream I0915 11:53:25.271575 7 log.go:181] (0xc00041b080) (0xc00111ef00) Stream added, broadcasting: 3 I0915 11:53:25.272190 7 log.go:181] (0xc00041b080) Reply frame received for 3 I0915 11:53:25.272212 7 log.go:181] (0xc00041b080) (0xc0025d1cc0) Create stream I0915 11:53:25.272221 7 log.go:181] (0xc00041b080) (0xc0025d1cc0) Stream added, broadcasting: 5 I0915 11:53:25.272743 7 log.go:181] (0xc00041b080) Reply frame received for 5 I0915 11:53:26.324941 7 log.go:181] (0xc00041b080) Data frame received for 3 I0915 11:53:26.324961 7 log.go:181] (0xc00111ef00) (3) Data frame handling I0915 11:53:26.324967 7 log.go:181] (0xc00111ef00) (3) Data frame sent I0915 11:53:26.324973 7 log.go:181] (0xc00041b080) Data frame received for 3 I0915 11:53:26.324978 7 log.go:181] (0xc00111ef00) (3) Data frame handling I0915 11:53:26.324993 7 log.go:181] (0xc00041b080) Data frame received for 5 I0915 11:53:26.324999 7 log.go:181] (0xc0025d1cc0) (5) Data frame handling I0915 11:53:26.326436 7 log.go:181] (0xc00041b080) Data frame received for 1 I0915 11:53:26.326451 7 log.go:181] (0xc00111edc0) (1) Data frame handling I0915 11:53:26.326463 7 log.go:181] (0xc00111edc0) (1) Data frame sent I0915 11:53:26.326522 7 log.go:181] (0xc00041b080) (0xc00111edc0) Stream removed, broadcasting: 1 I0915 11:53:26.326594 7 log.go:181] (0xc00041b080) (0xc00111edc0) Stream removed, broadcasting: 1 I0915 11:53:26.326608 7 log.go:181] (0xc00041b080) (0xc00111ef00) Stream removed, broadcasting: 3 I0915 11:53:26.326675 7 log.go:181] (0xc00041b080) Go away received I0915 11:53:26.326707 7 log.go:181] (0xc00041b080) (0xc0025d1cc0) Stream removed, broadcasting: 5 Sep 15 11:53:26.326: INFO: Found all expected endpoints: [netserver-0] Sep 15 11:53:26.329: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.222 8081 | grep -v '^\s*$'] Namespace:pod-network-test-4065 PodName:host-test-container-pod ContainerName:agnhost Stdin: 
CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 15 11:53:26.329: INFO: >>> kubeConfig: /root/.kube/config I0915 11:53:26.362944 7 log.go:181] (0xc002f9a580) (0xc0042a6c80) Create stream I0915 11:53:26.362975 7 log.go:181] (0xc002f9a580) (0xc0042a6c80) Stream added, broadcasting: 1 I0915 11:53:26.364535 7 log.go:181] (0xc002f9a580) Reply frame received for 1 I0915 11:53:26.364560 7 log.go:181] (0xc002f9a580) (0xc00111f180) Create stream I0915 11:53:26.364571 7 log.go:181] (0xc002f9a580) (0xc00111f180) Stream added, broadcasting: 3 I0915 11:53:26.365376 7 log.go:181] (0xc002f9a580) Reply frame received for 3 I0915 11:53:26.365407 7 log.go:181] (0xc002f9a580) (0xc003e31f40) Create stream I0915 11:53:26.365419 7 log.go:181] (0xc002f9a580) (0xc003e31f40) Stream added, broadcasting: 5 I0915 11:53:26.366146 7 log.go:181] (0xc002f9a580) Reply frame received for 5 I0915 11:53:27.441111 7 log.go:181] (0xc002f9a580) Data frame received for 5 I0915 11:53:27.441162 7 log.go:181] (0xc003e31f40) (5) Data frame handling I0915 11:53:27.441197 7 log.go:181] (0xc002f9a580) Data frame received for 3 I0915 11:53:27.441221 7 log.go:181] (0xc00111f180) (3) Data frame handling I0915 11:53:27.441239 7 log.go:181] (0xc00111f180) (3) Data frame sent I0915 11:53:27.441246 7 log.go:181] (0xc002f9a580) Data frame received for 3 I0915 11:53:27.441253 7 log.go:181] (0xc00111f180) (3) Data frame handling I0915 11:53:27.443769 7 log.go:181] (0xc002f9a580) Data frame received for 1 I0915 11:53:27.443780 7 log.go:181] (0xc0042a6c80) (1) Data frame handling I0915 11:53:27.443786 7 log.go:181] (0xc0042a6c80) (1) Data frame sent I0915 11:53:27.443795 7 log.go:181] (0xc002f9a580) (0xc0042a6c80) Stream removed, broadcasting: 1 I0915 11:53:27.443842 7 log.go:181] (0xc002f9a580) (0xc0042a6c80) Stream removed, broadcasting: 1 I0915 11:53:27.443865 7 log.go:181] (0xc002f9a580) Go away received I0915 11:53:27.443895 7 log.go:181] (0xc002f9a580) (0xc00111f180) Stream removed, broadcasting: 3 
I0915 11:53:27.443905 7 log.go:181] (0xc002f9a580) (0xc003e31f40) Stream removed, broadcasting: 5 Sep 15 11:53:27.443: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:53:27.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-4065" for this suite. • [SLOW TEST:34.888 seconds] [sig-network] Networking /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":262,"skipped":4376,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:53:27.451: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services 
STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service externalname-service with the type=ExternalName in namespace services-4622 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-4622 I0915 11:53:27.632665 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-4622, replica count: 2 I0915 11:53:30.683058 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0915 11:53:33.683215 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0915 11:53:36.683401 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Sep 15 11:53:36.683: INFO: Creating new exec pod Sep 15 11:53:41.815: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=services-4622 execpodlc72t -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Sep 15 11:53:42.050: INFO: stderr: "I0915 11:53:41.946290 3294 log.go:181] (0xc000fb6dc0) (0xc0005a2d20) Create stream\nI0915 11:53:41.946336 3294 log.go:181] (0xc000fb6dc0) (0xc0005a2d20) Stream added, broadcasting: 1\nI0915 11:53:41.950809 3294 log.go:181] (0xc000fb6dc0) Reply frame received for 
1\nI0915 11:53:41.950855 3294 log.go:181] (0xc000fb6dc0) (0xc000a88960) Create stream\nI0915 11:53:41.950867 3294 log.go:181] (0xc000fb6dc0) (0xc000a88960) Stream added, broadcasting: 3\nI0915 11:53:41.951519 3294 log.go:181] (0xc000fb6dc0) Reply frame received for 3\nI0915 11:53:41.951537 3294 log.go:181] (0xc000fb6dc0) (0xc000c50000) Create stream\nI0915 11:53:41.951547 3294 log.go:181] (0xc000fb6dc0) (0xc000c50000) Stream added, broadcasting: 5\nI0915 11:53:41.952418 3294 log.go:181] (0xc000fb6dc0) Reply frame received for 5\nI0915 11:53:42.046342 3294 log.go:181] (0xc000fb6dc0) Data frame received for 3\nI0915 11:53:42.046357 3294 log.go:181] (0xc000a88960) (3) Data frame handling\nI0915 11:53:42.046423 3294 log.go:181] (0xc000fb6dc0) Data frame received for 5\nI0915 11:53:42.046431 3294 log.go:181] (0xc000c50000) (5) Data frame handling\nI0915 11:53:42.046444 3294 log.go:181] (0xc000c50000) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0915 11:53:42.046478 3294 log.go:181] (0xc000fb6dc0) Data frame received for 5\nI0915 11:53:42.046493 3294 log.go:181] (0xc000c50000) (5) Data frame handling\nI0915 11:53:42.047528 3294 log.go:181] (0xc000fb6dc0) Data frame received for 1\nI0915 11:53:42.047544 3294 log.go:181] (0xc0005a2d20) (1) Data frame handling\nI0915 11:53:42.047559 3294 log.go:181] (0xc0005a2d20) (1) Data frame sent\nI0915 11:53:42.047568 3294 log.go:181] (0xc000fb6dc0) (0xc0005a2d20) Stream removed, broadcasting: 1\nI0915 11:53:42.047617 3294 log.go:181] (0xc000fb6dc0) Go away received\nI0915 11:53:42.047775 3294 log.go:181] (0xc000fb6dc0) (0xc0005a2d20) Stream removed, broadcasting: 1\nI0915 11:53:42.047784 3294 log.go:181] (0xc000fb6dc0) (0xc000a88960) Stream removed, broadcasting: 3\nI0915 11:53:42.047788 3294 log.go:181] (0xc000fb6dc0) (0xc000c50000) Stream removed, broadcasting: 5\n" Sep 15 11:53:42.051: INFO: stdout: "" Sep 15 11:53:42.051: INFO: Running 
'/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=services-4622 execpodlc72t -- /bin/sh -x -c nc -zv -t -w 2 10.103.50.114 80' Sep 15 11:53:42.218: INFO: stderr: "I0915 11:53:42.161869 3312 log.go:181] (0xc0008c6c60) (0xc000286fa0) Create stream\nI0915 11:53:42.161954 3312 log.go:181] (0xc0008c6c60) (0xc000286fa0) Stream added, broadcasting: 1\nI0915 11:53:42.166109 3312 log.go:181] (0xc0008c6c60) Reply frame received for 1\nI0915 11:53:42.166138 3312 log.go:181] (0xc0008c6c60) (0xc000382320) Create stream\nI0915 11:53:42.166146 3312 log.go:181] (0xc0008c6c60) (0xc000382320) Stream added, broadcasting: 3\nI0915 11:53:42.166748 3312 log.go:181] (0xc0008c6c60) Reply frame received for 3\nI0915 11:53:42.166785 3312 log.go:181] (0xc0008c6c60) (0xc000286140) Create stream\nI0915 11:53:42.166798 3312 log.go:181] (0xc0008c6c60) (0xc000286140) Stream added, broadcasting: 5\nI0915 11:53:42.167391 3312 log.go:181] (0xc0008c6c60) Reply frame received for 5\nI0915 11:53:42.214924 3312 log.go:181] (0xc0008c6c60) Data frame received for 3\nI0915 11:53:42.214969 3312 log.go:181] (0xc000382320) (3) Data frame handling\nI0915 11:53:42.214997 3312 log.go:181] (0xc0008c6c60) Data frame received for 5\nI0915 11:53:42.215009 3312 log.go:181] (0xc000286140) (5) Data frame handling\nI0915 11:53:42.215020 3312 log.go:181] (0xc000286140) (5) Data frame sent\nI0915 11:53:42.215030 3312 log.go:181] (0xc0008c6c60) Data frame received for 5\nI0915 11:53:42.215039 3312 log.go:181] (0xc000286140) (5) Data frame handling\n+ nc -zv -t -w 2 10.103.50.114 80\nConnection to 10.103.50.114 80 port [tcp/http] succeeded!\nI0915 11:53:42.216041 3312 log.go:181] (0xc0008c6c60) Data frame received for 1\nI0915 11:53:42.216058 3312 log.go:181] (0xc000286fa0) (1) Data frame handling\nI0915 11:53:42.216064 3312 log.go:181] (0xc000286fa0) (1) Data frame sent\nI0915 11:53:42.216074 3312 log.go:181] (0xc0008c6c60) (0xc000286fa0) Stream removed, 
broadcasting: 1\nI0915 11:53:42.216118 3312 log.go:181] (0xc0008c6c60) Go away received\nI0915 11:53:42.216335 3312 log.go:181] (0xc0008c6c60) (0xc000286fa0) Stream removed, broadcasting: 1\nI0915 11:53:42.216346 3312 log.go:181] (0xc0008c6c60) (0xc000382320) Stream removed, broadcasting: 3\nI0915 11:53:42.216351 3312 log.go:181] (0xc0008c6c60) (0xc000286140) Stream removed, broadcasting: 5\n" Sep 15 11:53:42.219: INFO: stdout: "" Sep 15 11:53:42.219: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:53:42.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4622" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:14.821 seconds] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":303,"completed":263,"skipped":4393,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] 
ReplicationController /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:53:42.273: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating replication controller my-hostname-basic-f094de4d-3fd9-4a30-affb-d7130a65a1d3 Sep 15 11:53:42.363: INFO: Pod name my-hostname-basic-f094de4d-3fd9-4a30-affb-d7130a65a1d3: Found 0 pods out of 1 Sep 15 11:53:47.367: INFO: Pod name my-hostname-basic-f094de4d-3fd9-4a30-affb-d7130a65a1d3: Found 1 pods out of 1 Sep 15 11:53:47.367: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-f094de4d-3fd9-4a30-affb-d7130a65a1d3" are running Sep 15 11:53:47.373: INFO: Pod "my-hostname-basic-f094de4d-3fd9-4a30-affb-d7130a65a1d3-9h76m" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-15 11:53:42 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-15 11:53:46 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-15 11:53:46 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-15 11:53:42 +0000 UTC Reason: Message:}]) Sep 15 11:53:47.373: 
INFO: Trying to dial the pod Sep 15 11:53:52.384: INFO: Controller my-hostname-basic-f094de4d-3fd9-4a30-affb-d7130a65a1d3: Got expected result from replica 1 [my-hostname-basic-f094de4d-3fd9-4a30-affb-d7130a65a1d3-9h76m]: "my-hostname-basic-f094de4d-3fd9-4a30-affb-d7130a65a1d3-9h76m", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:53:52.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-3599" for this suite. • [SLOW TEST:10.117 seconds] [sig-apps] ReplicationController /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":303,"completed":264,"skipped":4420,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should patch a secret [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:53:52.390: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] 
should patch a secret [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a secret STEP: listing secrets in all namespaces to ensure that there are more than zero STEP: patching the secret STEP: deleting the secret using a LabelSelector STEP: listing secrets in all namespaces, searching for label name and value in patch [AfterEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:53:52.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3758" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":303,"completed":265,"skipped":4432,"failed":0} SSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:53:52.643: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:54:00.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-861" for this suite. • [SLOW TEST:8.164 seconds] [k8s.io] Kubelet /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when scheduling a busybox command that always fails in a pod /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:79 should have an terminated reason [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":303,"completed":266,"skipped":4435,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:54:00.807: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 15 11:54:01.781: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 15 11:54:03.788: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735767641, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735767641, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735767641, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735767641, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 15 11:54:05.792: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, 
ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735767641, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735767641, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735767641, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735767641, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 15 11:54:08.829: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:54:09.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8834" for this suite. STEP: Destroying namespace "webhook-8834-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.120 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":303,"completed":267,"skipped":4442,"failed":0} SSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:54:09.927: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide podname only [NodeConformance] [Conformance] 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 15 11:54:10.016: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bf5e7b82-e3c7-406d-b087-f3d2a0b03a50" in namespace "projected-5655" to be "Succeeded or Failed" Sep 15 11:54:10.022: INFO: Pod "downwardapi-volume-bf5e7b82-e3c7-406d-b087-f3d2a0b03a50": Phase="Pending", Reason="", readiness=false. Elapsed: 5.38796ms Sep 15 11:54:12.250: INFO: Pod "downwardapi-volume-bf5e7b82-e3c7-406d-b087-f3d2a0b03a50": Phase="Pending", Reason="", readiness=false. Elapsed: 2.233384572s Sep 15 11:54:14.253: INFO: Pod "downwardapi-volume-bf5e7b82-e3c7-406d-b087-f3d2a0b03a50": Phase="Running", Reason="", readiness=true. Elapsed: 4.237150561s Sep 15 11:54:16.257: INFO: Pod "downwardapi-volume-bf5e7b82-e3c7-406d-b087-f3d2a0b03a50": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.24087216s STEP: Saw pod success Sep 15 11:54:16.257: INFO: Pod "downwardapi-volume-bf5e7b82-e3c7-406d-b087-f3d2a0b03a50" satisfied condition "Succeeded or Failed" Sep 15 11:54:16.259: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-bf5e7b82-e3c7-406d-b087-f3d2a0b03a50 container client-container: STEP: delete the pod Sep 15 11:54:16.374: INFO: Waiting for pod downwardapi-volume-bf5e7b82-e3c7-406d-b087-f3d2a0b03a50 to disappear Sep 15 11:54:16.397: INFO: Pod downwardapi-volume-bf5e7b82-e3c7-406d-b087-f3d2a0b03a50 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:54:16.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5655" for this suite. 
• [SLOW TEST:6.475 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide podname only [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":303,"completed":268,"skipped":4447,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:54:16.402: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 pods, got 2 pods STEP: expected 0 rs, got 1 rs STEP: Gathering metrics W0915 11:54:17.820029 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
Sep 15 11:55:21.150: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:55:21.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2860" for this suite. • [SLOW TEST:64.754 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete RS created by deployment when not orphaning [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":303,"completed":269,"skipped":4460,"failed":0} [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:55:21.157: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 15 11:55:22.088: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 15 11:55:24.097: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735767722, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735767722, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735767722, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735767722, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 15 11:55:26.100: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735767722, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735767722, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735767722, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735767722, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 15 11:55:28.370: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735767722, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735767722, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735767722, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735767722, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 15 11:55:30.100: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735767722, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735767722, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", 
Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735767722, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735767722, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 15 11:55:32.100: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735767722, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735767722, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735767722, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735767722, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 15 11:55:35.141: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not 
comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:55:35.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6224" for this suite. STEP: Destroying namespace "webhook-6224-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:14.337 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":303,"completed":270,"skipped":4460,"failed":0} SSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:55:35.494: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-c88bc6d7-725c-4577-8d17-c5cd9d323acf in namespace container-probe-9229 Sep 15 11:55:41.672: INFO: Started pod liveness-c88bc6d7-725c-4577-8d17-c5cd9d323acf in namespace container-probe-9229 STEP: checking the pod's current state and verifying that restartCount is present Sep 15 11:55:41.675: INFO: Initial restart count of pod liveness-c88bc6d7-725c-4577-8d17-c5cd9d323acf is 0 Sep 15 11:56:04.823: INFO: Restart count of pod container-probe-9229/liveness-c88bc6d7-725c-4577-8d17-c5cd9d323acf is now 1 (23.147971234s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:56:04.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9229" for this suite. 
• [SLOW TEST:29.421 seconds] [k8s.io] Probing container /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":303,"completed":271,"skipped":4463,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:56:04.916: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. 
[Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 11:56:06.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7588" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":303,"completed":272,"skipped":4493,"failed":0} SSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 11:56:06.024: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Sep 15 11:56:08.220: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Sep 15 11:56:08.665: 
INFO: Waiting for terminating namespaces to be deleted... Sep 15 11:56:09.037: INFO: Logging pods the apiserver thinks are on node kali-worker before test Sep 15 11:56:09.041: INFO: kindnet-jk7qk from kube-system started at 2020-09-13 16:57:34 +0000 UTC (1 container status recorded) Sep 15 11:56:09.041: INFO: Container kindnet-cni ready: true, restart count 0 Sep 15 11:56:09.041: INFO: kube-proxy-kz8hk from kube-system started at 2020-09-13 16:57:34 +0000 UTC (1 container status recorded) Sep 15 11:56:09.041: INFO: Container kube-proxy ready: true, restart count 0 Sep 15 11:56:09.041: INFO: Logging pods the apiserver thinks are on node kali-worker2 before test Sep 15 11:56:09.044: INFO: kindnet-r64bh from kube-system started at 2020-09-13 16:57:34 +0000 UTC (1 container status recorded) Sep 15 11:56:09.044: INFO: Container kindnet-cni ready: true, restart count 0 Sep 15 11:56:09.044: INFO: kube-proxy-rnv9w from kube-system started at 2020-09-13 16:57:34 +0000 UTC (1 container status recorded) Sep 15 11:56:09.044: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
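[Editor's note] The hostPort conflict this spec validates can be sketched as two minimal pod manifests; the image and the node-pinning label below are illustrative assumptions. pod4 binds hostPort 54322 with hostIP unset (which means 0.0.0.0, all host addresses), so a second pod requesting the same hostPort and protocol on 127.0.0.1 on the same node cannot be scheduled:

```yaml
# pod4: schedules, and claims 54322/TCP on every host address (hostIP omitted == 0.0.0.0)
apiVersion: v1
kind: Pod
metadata:
  name: pod4
spec:
  nodeSelector:
    example.io/pinned: "true"   # illustrative label pinning both pods to the same node
  containers:
  - name: agnhost
    image: k8s.gcr.io/e2e-test-images/agnhost:2.20   # illustrative image
    ports:
    - containerPort: 8080
      hostPort: 54322
      protocol: TCP
---
# pod5: stays Pending — 127.0.0.1:54322/TCP overlaps pod4's 0.0.0.0:54322/TCP claim
apiVersion: v1
kind: Pod
metadata:
  name: pod5
spec:
  nodeSelector:
    example.io/pinned: "true"
  containers:
  - name: agnhost
    image: k8s.gcr.io/e2e-test-images/agnhost:2.20
    ports:
    - containerPort: 8080
      hostPort: 54322
      hostIP: 127.0.0.1
      protocol: TCP
```

The scheduler treats a 0.0.0.0 hostIP as conflicting with any specific hostIP for the same port and protocol, which is why pod5's "expect not scheduled" assertion below passes.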
STEP: verifying the node has the label kubernetes.io/e2e-1ceaf1be-0e12-451e-9f08-98e2704d7611 95 STEP: Trying to create a pod (pod4) with hostport 54322 and hostIP 0.0.0.0 (empty string here) and expect scheduled STEP: Trying to create another pod (pod5) with hostport 54322 but hostIP 127.0.0.1 on the node on which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-1ceaf1be-0e12-451e-9f08-98e2704d7611 off the node kali-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-1ceaf1be-0e12-451e-9f08-98e2704d7611 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 12:01:29.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6615" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:323.659 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":303,"completed":273,"skipped":4501,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support 
(root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 12:01:29.684: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on node default medium Sep 15 12:01:29.871: INFO: Waiting up to 5m0s for pod "pod-92ca92f7-06a3-4f66-b25b-824430d23872" in namespace "emptydir-2694" to be "Succeeded or Failed" Sep 15 12:01:29.957: INFO: Pod "pod-92ca92f7-06a3-4f66-b25b-824430d23872": Phase="Pending", Reason="", readiness=false. Elapsed: 85.628821ms Sep 15 12:01:32.489: INFO: Pod "pod-92ca92f7-06a3-4f66-b25b-824430d23872": Phase="Pending", Reason="", readiness=false. Elapsed: 2.618025997s Sep 15 12:01:34.976: INFO: Pod "pod-92ca92f7-06a3-4f66-b25b-824430d23872": Phase="Pending", Reason="", readiness=false. Elapsed: 5.104977889s Sep 15 12:01:37.005: INFO: Pod "pod-92ca92f7-06a3-4f66-b25b-824430d23872": Phase="Pending", Reason="", readiness=false. Elapsed: 7.133996907s Sep 15 12:01:39.009: INFO: Pod "pod-92ca92f7-06a3-4f66-b25b-824430d23872": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 9.137246352s STEP: Saw pod success Sep 15 12:01:39.009: INFO: Pod "pod-92ca92f7-06a3-4f66-b25b-824430d23872" satisfied condition "Succeeded or Failed" Sep 15 12:01:39.011: INFO: Trying to get logs from node kali-worker2 pod pod-92ca92f7-06a3-4f66-b25b-824430d23872 container test-container: STEP: delete the pod Sep 15 12:01:39.059: INFO: Waiting for pod pod-92ca92f7-06a3-4f66-b25b-824430d23872 to disappear Sep 15 12:01:39.069: INFO: Pod pod-92ca92f7-06a3-4f66-b25b-824430d23872 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 12:01:39.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2694" for this suite. • [SLOW TEST:9.394 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":274,"skipped":4533,"failed":0} [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a 
kubernetes client Sep 15 12:01:39.078: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Sep 15 12:01:51.273: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Sep 15 12:01:51.291: INFO: Pod pod-with-poststart-http-hook still exists Sep 15 12:01:53.292: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Sep 15 12:01:53.467: INFO: Pod pod-with-poststart-http-hook still exists Sep 15 12:01:55.291: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Sep 15 12:01:55.295: INFO: Pod pod-with-poststart-http-hook still exists Sep 15 12:01:57.292: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Sep 15 12:01:57.296: INFO: Pod pod-with-poststart-http-hook still exists Sep 15 12:01:59.292: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Sep 15 12:01:59.296: INFO: Pod pod-with-poststart-http-hook still exists Sep 15 12:02:01.292: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Sep 15 12:02:01.295: INFO: Pod pod-with-poststart-http-hook still exists Sep 15 12:02:03.292: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Sep 15 12:02:03.311: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 12:02:03.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-766" for this suite. • [SLOW TEST:24.239 seconds] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":303,"completed":275,"skipped":4533,"failed":0} SS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 12:02:03.317: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 15 12:02:03.365: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Sep 15 12:02:05.415: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 12:02:06.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-4735" for this suite. 
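[Editor's note] The quota-versus-replicas interaction exercised above can be reproduced with a ResourceQuota capping the namespace at two pods and a ReplicationController asking for three. A sketch follows; the object names match the log, while the pod template and image are illustrative assumptions:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: condition-test
spec:
  hard:
    pods: "2"            # the namespace may run at most two pods
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: condition-test
spec:
  replicas: 3            # exceeds the quota, so the rc surfaces a ReplicaFailure condition
  selector:
    name: condition-test
  template:
    metadata:
      labels:
        name: condition-test
    spec:
      containers:
      - name: nginx
        image: nginx     # illustrative image
```

Scaling `replicas` down to 2 brings the rc within quota and clears the failure condition, which is the "Scaling down rc ... to satisfy pod quota" step the test asserts on.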
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":303,"completed":276,"skipped":4535,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 12:02:06.593: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3685 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-3685;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3685 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-3685;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3685.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-3685.svc;check="$$(dig +tcp 
+noall +answer +search dns-test-service.dns-3685.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-3685.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3685.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-3685.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3685.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-3685.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3685.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-3685.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3685.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-3685.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3685.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 233.0.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.0.233_udp@PTR;check="$$(dig +tcp +noall +answer +search 233.0.97.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.97.0.233_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3685 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-3685;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3685 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-3685;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3685.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-3685.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3685.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-3685.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3685.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-3685.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3685.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-3685.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3685.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-3685.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3685.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-3685.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-3685.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 233.0.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.0.233_udp@PTR;check="$$(dig +tcp +noall +answer +search 233.0.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.0.233_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Sep 15 12:02:21.928: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:21.931: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:21.933: INFO: Unable to read wheezy_udp@dns-test-service.dns-3685 from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:21.935: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3685 from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:21.936: INFO: Unable to read wheezy_udp@dns-test-service.dns-3685.svc from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) 
Sep 15 12:02:21.938: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3685.svc from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:21.940: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3685.svc from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:21.942: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3685.svc from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:21.955: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:21.957: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:21.959: INFO: Unable to read jessie_udp@dns-test-service.dns-3685 from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:21.960: INFO: Unable to read jessie_tcp@dns-test-service.dns-3685 from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:21.962: INFO: Unable to read jessie_udp@dns-test-service.dns-3685.svc from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods 
dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:21.963: INFO: Unable to read jessie_tcp@dns-test-service.dns-3685.svc from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:21.965: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3685.svc from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:21.966: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3685.svc from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:21.977: INFO: Lookups using dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3685 wheezy_tcp@dns-test-service.dns-3685 wheezy_udp@dns-test-service.dns-3685.svc wheezy_tcp@dns-test-service.dns-3685.svc wheezy_udp@_http._tcp.dns-test-service.dns-3685.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3685.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3685 jessie_tcp@dns-test-service.dns-3685 jessie_udp@dns-test-service.dns-3685.svc jessie_tcp@dns-test-service.dns-3685.svc jessie_udp@_http._tcp.dns-test-service.dns-3685.svc jessie_tcp@_http._tcp.dns-test-service.dns-3685.svc] Sep 15 12:02:26.981: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:26.984: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource 
(get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:26.987: INFO: Unable to read wheezy_udp@dns-test-service.dns-3685 from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:26.990: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3685 from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:26.992: INFO: Unable to read wheezy_udp@dns-test-service.dns-3685.svc from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:26.995: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3685.svc from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:26.997: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3685.svc from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:26.999: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3685.svc from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:27.016: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:27.018: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find 
the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:27.021: INFO: Unable to read jessie_udp@dns-test-service.dns-3685 from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:27.023: INFO: Unable to read jessie_tcp@dns-test-service.dns-3685 from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:27.025: INFO: Unable to read jessie_udp@dns-test-service.dns-3685.svc from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:27.028: INFO: Unable to read jessie_tcp@dns-test-service.dns-3685.svc from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:27.030: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3685.svc from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:27.032: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3685.svc from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:27.046: INFO: Lookups using dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3685 wheezy_tcp@dns-test-service.dns-3685 wheezy_udp@dns-test-service.dns-3685.svc wheezy_tcp@dns-test-service.dns-3685.svc wheezy_udp@_http._tcp.dns-test-service.dns-3685.svc 
wheezy_tcp@_http._tcp.dns-test-service.dns-3685.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3685 jessie_tcp@dns-test-service.dns-3685 jessie_udp@dns-test-service.dns-3685.svc jessie_tcp@dns-test-service.dns-3685.svc jessie_udp@_http._tcp.dns-test-service.dns-3685.svc jessie_tcp@_http._tcp.dns-test-service.dns-3685.svc] Sep 15 12:02:31.982: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:31.987: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:31.989: INFO: Unable to read wheezy_udp@dns-test-service.dns-3685 from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:31.991: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3685 from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:31.992: INFO: Unable to read wheezy_udp@dns-test-service.dns-3685.svc from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:31.998: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3685.svc from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:32.001: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3685.svc from pod 
dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:32.003: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3685.svc from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:32.020: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:32.022: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:32.025: INFO: Unable to read jessie_udp@dns-test-service.dns-3685 from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:32.027: INFO: Unable to read jessie_tcp@dns-test-service.dns-3685 from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:32.029: INFO: Unable to read jessie_udp@dns-test-service.dns-3685.svc from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:32.032: INFO: Unable to read jessie_tcp@dns-test-service.dns-3685.svc from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:32.034: INFO: Unable to read 
jessie_udp@_http._tcp.dns-test-service.dns-3685.svc from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:32.036: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3685.svc from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:32.054: INFO: Lookups using dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3685 wheezy_tcp@dns-test-service.dns-3685 wheezy_udp@dns-test-service.dns-3685.svc wheezy_tcp@dns-test-service.dns-3685.svc wheezy_udp@_http._tcp.dns-test-service.dns-3685.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3685.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3685 jessie_tcp@dns-test-service.dns-3685 jessie_udp@dns-test-service.dns-3685.svc jessie_tcp@dns-test-service.dns-3685.svc jessie_udp@_http._tcp.dns-test-service.dns-3685.svc jessie_tcp@_http._tcp.dns-test-service.dns-3685.svc] Sep 15 12:02:36.982: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:36.986: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:36.990: INFO: Unable to read wheezy_udp@dns-test-service.dns-3685 from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:36.994: INFO: Unable to 
read wheezy_tcp@dns-test-service.dns-3685 from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:36.997: INFO: Unable to read wheezy_udp@dns-test-service.dns-3685.svc from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:37.001: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3685.svc from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:37.004: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3685.svc from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:37.007: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3685.svc from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:37.027: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:37.030: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:37.033: INFO: Unable to read jessie_udp@dns-test-service.dns-3685 from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 
12:02:37.036: INFO: Unable to read jessie_tcp@dns-test-service.dns-3685 from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:37.040: INFO: Unable to read jessie_udp@dns-test-service.dns-3685.svc from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:37.043: INFO: Unable to read jessie_tcp@dns-test-service.dns-3685.svc from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:37.046: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3685.svc from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:37.050: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3685.svc from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:37.068: INFO: Lookups using dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3685 wheezy_tcp@dns-test-service.dns-3685 wheezy_udp@dns-test-service.dns-3685.svc wheezy_tcp@dns-test-service.dns-3685.svc wheezy_udp@_http._tcp.dns-test-service.dns-3685.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3685.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3685 jessie_tcp@dns-test-service.dns-3685 jessie_udp@dns-test-service.dns-3685.svc jessie_tcp@dns-test-service.dns-3685.svc jessie_udp@_http._tcp.dns-test-service.dns-3685.svc 
jessie_tcp@_http._tcp.dns-test-service.dns-3685.svc] Sep 15 12:02:42.418: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:42.422: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:42.467: INFO: Unable to read wheezy_udp@dns-test-service.dns-3685 from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:42.470: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3685 from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:42.473: INFO: Unable to read wheezy_udp@dns-test-service.dns-3685.svc from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:42.476: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3685.svc from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:42.479: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3685.svc from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:42.482: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3685.svc from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the 
requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:42.600: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:42.604: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:42.608: INFO: Unable to read jessie_udp@dns-test-service.dns-3685 from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:42.612: INFO: Unable to read jessie_tcp@dns-test-service.dns-3685 from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:42.615: INFO: Unable to read jessie_udp@dns-test-service.dns-3685.svc from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:42.619: INFO: Unable to read jessie_tcp@dns-test-service.dns-3685.svc from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:42.622: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3685.svc from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:42.625: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3685.svc from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the 
server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:42.657: INFO: Lookups using dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3685 wheezy_tcp@dns-test-service.dns-3685 wheezy_udp@dns-test-service.dns-3685.svc wheezy_tcp@dns-test-service.dns-3685.svc wheezy_udp@_http._tcp.dns-test-service.dns-3685.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3685.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3685 jessie_tcp@dns-test-service.dns-3685 jessie_udp@dns-test-service.dns-3685.svc jessie_tcp@dns-test-service.dns-3685.svc jessie_udp@_http._tcp.dns-test-service.dns-3685.svc jessie_tcp@_http._tcp.dns-test-service.dns-3685.svc] Sep 15 12:02:46.982: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:46.986: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:46.989: INFO: Unable to read wheezy_udp@dns-test-service.dns-3685 from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:46.991: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3685 from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:46.993: INFO: Unable to read wheezy_udp@dns-test-service.dns-3685.svc from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find 
the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:46.995: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3685.svc from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:46.997: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3685.svc from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:46.999: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3685.svc from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:47.013: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:47.016: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:47.018: INFO: Unable to read jessie_udp@dns-test-service.dns-3685 from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:47.020: INFO: Unable to read jessie_tcp@dns-test-service.dns-3685 from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:47.023: INFO: Unable to read jessie_udp@dns-test-service.dns-3685.svc from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the 
server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:47.028: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3685.svc from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:47.031: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3685.svc from pod dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571: the server could not find the requested resource (get pods dns-test-ccae2187-701e-425c-8b73-2f6894b89571) Sep 15 12:02:47.047: INFO: Lookups using dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3685 wheezy_tcp@dns-test-service.dns-3685 wheezy_udp@dns-test-service.dns-3685.svc wheezy_tcp@dns-test-service.dns-3685.svc wheezy_udp@_http._tcp.dns-test-service.dns-3685.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3685.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3685 jessie_tcp@dns-test-service.dns-3685 jessie_udp@dns-test-service.dns-3685.svc jessie_udp@_http._tcp.dns-test-service.dns-3685.svc jessie_tcp@_http._tcp.dns-test-service.dns-3685.svc] Sep 15 12:02:52.063: INFO: DNS probes using dns-3685/dns-test-ccae2187-701e-425c-8b73-2f6894b89571 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 12:02:57.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3685" for this suite. 
• [SLOW TEST:51.111 seconds] [sig-network] DNS /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":303,"completed":277,"skipped":4552,"failed":0} SSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] version v1 /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 12:02:57.704: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-f9d65 in namespace proxy-5482 I0915 12:02:58.500418 7 runners.go:190] Created replication controller with name: proxy-service-f9d65, namespace: proxy-5482, replica count: 1 I0915 12:02:59.550801 7 runners.go:190] proxy-service-f9d65 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0915 12:03:00.551014 7 
runners.go:190] proxy-service-f9d65 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0915 12:03:01.551201 7 runners.go:190] proxy-service-f9d65 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0915 12:03:02.551360 7 runners.go:190] proxy-service-f9d65 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0915 12:03:03.551513 7 runners.go:190] proxy-service-f9d65 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0915 12:03:04.551702 7 runners.go:190] proxy-service-f9d65 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0915 12:03:05.551935 7 runners.go:190] proxy-service-f9d65 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0915 12:03:06.552309 7 runners.go:190] proxy-service-f9d65 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0915 12:03:07.552461 7 runners.go:190] proxy-service-f9d65 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0915 12:03:08.552621 7 runners.go:190] proxy-service-f9d65 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0915 12:03:09.552825 7 runners.go:190] proxy-service-f9d65 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0915 12:03:10.553060 7 runners.go:190] proxy-service-f9d65 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0915 12:03:11.553334 7 runners.go:190] proxy-service-f9d65 Pods: 
1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Sep 15 12:03:11.690: INFO: setup took 13.496846529s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Sep 15 12:03:11.698: INFO: (0) /api/v1/namespaces/proxy-5482/services/http:proxy-service-f9d65:portname1/proxy/: foo (200; 7.995869ms) Sep 15 12:03:11.698: INFO: (0) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-f9d65-t452h:162/proxy/: bar (200; 8.214907ms) Sep 15 12:03:11.703: INFO: (0) /api/v1/namespaces/proxy-5482/pods/proxy-service-f9d65-t452h:160/proxy/: foo (200; 13.427966ms) Sep 15 12:03:11.703: INFO: (0) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-f9d65-t452h:1080/proxy/: ... (200; 13.350533ms) Sep 15 12:03:11.703: INFO: (0) /api/v1/namespaces/proxy-5482/services/http:proxy-service-f9d65:portname2/proxy/: bar (200; 13.545082ms) Sep 15 12:03:11.703: INFO: (0) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-f9d65-t452h:160/proxy/: foo (200; 13.476597ms) Sep 15 12:03:11.703: INFO: (0) /api/v1/namespaces/proxy-5482/pods/proxy-service-f9d65-t452h:1080/proxy/: test<... 
(200; 13.469792ms) Sep 15 12:03:11.703: INFO: (0) /api/v1/namespaces/proxy-5482/services/proxy-service-f9d65:portname2/proxy/: bar (200; 13.4585ms) Sep 15 12:03:11.703: INFO: (0) /api/v1/namespaces/proxy-5482/services/proxy-service-f9d65:portname1/proxy/: foo (200; 13.444375ms) Sep 15 12:03:11.703: INFO: (0) /api/v1/namespaces/proxy-5482/pods/proxy-service-f9d65-t452h:162/proxy/: bar (200; 13.413523ms) Sep 15 12:03:11.704: INFO: (0) /api/v1/namespaces/proxy-5482/pods/proxy-service-f9d65-t452h/proxy/: test (200; 14.004773ms) Sep 15 12:03:11.705: INFO: (0) /api/v1/namespaces/proxy-5482/services/https:proxy-service-f9d65:tlsportname1/proxy/: tls baz (200; 14.656637ms) Sep 15 12:03:11.705: INFO: (0) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-f9d65-t452h:460/proxy/: tls baz (200; 14.643491ms) Sep 15 12:03:11.707: INFO: (0) /api/v1/namespaces/proxy-5482/services/https:proxy-service-f9d65:tlsportname2/proxy/: tls qux (200; 17.658633ms) Sep 15 12:03:11.707: INFO: (0) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-f9d65-t452h:462/proxy/: tls qux (200; 17.486589ms) Sep 15 12:03:11.708: INFO: (0) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-f9d65-t452h:443/proxy/: test (200; 17.891632ms) Sep 15 12:03:11.726: INFO: (1) /api/v1/namespaces/proxy-5482/pods/proxy-service-f9d65-t452h:162/proxy/: bar (200; 17.83114ms) Sep 15 12:03:11.726: INFO: (1) /api/v1/namespaces/proxy-5482/services/https:proxy-service-f9d65:tlsportname1/proxy/: tls baz (200; 17.859232ms) Sep 15 12:03:11.726: INFO: (1) /api/v1/namespaces/proxy-5482/pods/proxy-service-f9d65-t452h:1080/proxy/: test<... 
(200; 17.864808ms) Sep 15 12:03:11.727: INFO: (1) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-f9d65-t452h:162/proxy/: bar (200; 18.158561ms) Sep 15 12:03:11.727: INFO: (1) /api/v1/namespaces/proxy-5482/services/http:proxy-service-f9d65:portname2/proxy/: bar (200; 18.262809ms) Sep 15 12:03:11.727: INFO: (1) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-f9d65-t452h:1080/proxy/: ... (200; 18.220527ms) Sep 15 12:03:11.727: INFO: (1) /api/v1/namespaces/proxy-5482/services/proxy-service-f9d65:portname1/proxy/: foo (200; 18.856952ms) Sep 15 12:03:11.760: INFO: (2) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-f9d65-t452h:460/proxy/: tls baz (200; 32.941725ms) Sep 15 12:03:11.760: INFO: (2) /api/v1/namespaces/proxy-5482/pods/proxy-service-f9d65-t452h:160/proxy/: foo (200; 33.047033ms) Sep 15 12:03:11.760: INFO: (2) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-f9d65-t452h:160/proxy/: foo (200; 33.009658ms) Sep 15 12:03:11.761: INFO: (2) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-f9d65-t452h:1080/proxy/: ... (200; 33.252543ms) Sep 15 12:03:11.761: INFO: (2) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-f9d65-t452h:462/proxy/: tls qux (200; 33.392849ms) Sep 15 12:03:11.762: INFO: (2) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-f9d65-t452h:443/proxy/: test (200; 34.482481ms) Sep 15 12:03:11.762: INFO: (2) /api/v1/namespaces/proxy-5482/services/proxy-service-f9d65:portname1/proxy/: foo (200; 34.692422ms) Sep 15 12:03:11.762: INFO: (2) /api/v1/namespaces/proxy-5482/services/http:proxy-service-f9d65:portname1/proxy/: foo (200; 34.857662ms) Sep 15 12:03:11.762: INFO: (2) /api/v1/namespaces/proxy-5482/services/proxy-service-f9d65:portname2/proxy/: bar (200; 34.876874ms) Sep 15 12:03:11.763: INFO: (2) /api/v1/namespaces/proxy-5482/pods/proxy-service-f9d65-t452h:1080/proxy/: test<... 
(200; 35.548377ms) Sep 15 12:03:11.763: INFO: (2) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-f9d65-t452h:162/proxy/: bar (200; 35.459977ms) Sep 15 12:03:11.763: INFO: (2) /api/v1/namespaces/proxy-5482/pods/proxy-service-f9d65-t452h:162/proxy/: bar (200; 35.556502ms) Sep 15 12:03:11.763: INFO: (2) /api/v1/namespaces/proxy-5482/services/https:proxy-service-f9d65:tlsportname1/proxy/: tls baz (200; 35.609608ms) Sep 15 12:03:11.763: INFO: (2) /api/v1/namespaces/proxy-5482/services/http:proxy-service-f9d65:portname2/proxy/: bar (200; 35.76879ms) Sep 15 12:03:11.766: INFO: (3) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-f9d65-t452h:443/proxy/: test<... (200; 5.735426ms) Sep 15 12:03:11.769: INFO: (3) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-f9d65-t452h:160/proxy/: foo (200; 5.794456ms) Sep 15 12:03:11.769: INFO: (3) /api/v1/namespaces/proxy-5482/services/http:proxy-service-f9d65:portname2/proxy/: bar (200; 5.921412ms) Sep 15 12:03:11.770: INFO: (3) /api/v1/namespaces/proxy-5482/services/proxy-service-f9d65:portname1/proxy/: foo (200; 6.364213ms) Sep 15 12:03:11.770: INFO: (3) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-f9d65-t452h:1080/proxy/: ... 
(200; 6.467733ms) Sep 15 12:03:11.770: INFO: (3) /api/v1/namespaces/proxy-5482/services/https:proxy-service-f9d65:tlsportname2/proxy/: tls qux (200; 6.377285ms) Sep 15 12:03:11.770: INFO: (3) /api/v1/namespaces/proxy-5482/services/https:proxy-service-f9d65:tlsportname1/proxy/: tls baz (200; 6.541578ms) Sep 15 12:03:11.770: INFO: (3) /api/v1/namespaces/proxy-5482/services/proxy-service-f9d65:portname2/proxy/: bar (200; 6.474868ms) Sep 15 12:03:11.770: INFO: (3) /api/v1/namespaces/proxy-5482/pods/proxy-service-f9d65-t452h:160/proxy/: foo (200; 6.495173ms) Sep 15 12:03:11.770: INFO: (3) /api/v1/namespaces/proxy-5482/pods/proxy-service-f9d65-t452h/proxy/: test (200; 6.377182ms) Sep 15 12:03:11.770: INFO: (3) /api/v1/namespaces/proxy-5482/pods/proxy-service-f9d65-t452h:162/proxy/: bar (200; 6.382288ms) Sep 15 12:03:11.770: INFO: (3) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-f9d65-t452h:462/proxy/: tls qux (200; 6.386585ms) Sep 15 12:03:11.770: INFO: (3) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-f9d65-t452h:460/proxy/: tls baz (200; 6.833853ms) Sep 15 12:03:11.772: INFO: (4) /api/v1/namespaces/proxy-5482/pods/proxy-service-f9d65-t452h:160/proxy/: foo (200; 1.956368ms) Sep 15 12:03:11.773: INFO: (4) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-f9d65-t452h:462/proxy/: tls qux (200; 2.487559ms) Sep 15 12:03:11.774: INFO: (4) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-f9d65-t452h:162/proxy/: bar (200; 3.697561ms) Sep 15 12:03:11.774: INFO: (4) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-f9d65-t452h:160/proxy/: foo (200; 3.811994ms) Sep 15 12:03:11.774: INFO: (4) /api/v1/namespaces/proxy-5482/services/proxy-service-f9d65:portname1/proxy/: foo (200; 3.887259ms) Sep 15 12:03:11.774: INFO: (4) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-f9d65-t452h:1080/proxy/: ... 
(200; 3.860464ms) Sep 15 12:03:11.774: INFO: (4) /api/v1/namespaces/proxy-5482/services/https:proxy-service-f9d65:tlsportname2/proxy/: tls qux (200; 3.887531ms) Sep 15 12:03:11.774: INFO: (4) /api/v1/namespaces/proxy-5482/pods/proxy-service-f9d65-t452h/proxy/: test (200; 3.888363ms) Sep 15 12:03:11.774: INFO: (4) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-f9d65-t452h:460/proxy/: tls baz (200; 3.848223ms) Sep 15 12:03:11.774: INFO: (4) /api/v1/namespaces/proxy-5482/services/proxy-service-f9d65:portname2/proxy/: bar (200; 4.118914ms) Sep 15 12:03:11.774: INFO: (4) /api/v1/namespaces/proxy-5482/services/http:proxy-service-f9d65:portname1/proxy/: foo (200; 4.153728ms) Sep 15 12:03:11.774: INFO: (4) /api/v1/namespaces/proxy-5482/pods/proxy-service-f9d65-t452h:162/proxy/: bar (200; 4.17104ms) Sep 15 12:03:11.775: INFO: (4) /api/v1/namespaces/proxy-5482/services/http:proxy-service-f9d65:portname2/proxy/: bar (200; 4.311887ms) Sep 15 12:03:11.775: INFO: (4) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-f9d65-t452h:443/proxy/: test<... (200; 4.484933ms) Sep 15 12:03:11.778: INFO: (5) /api/v1/namespaces/proxy-5482/pods/proxy-service-f9d65-t452h:1080/proxy/: test<... (200; 2.727845ms) Sep 15 12:03:11.778: INFO: (5) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-f9d65-t452h:460/proxy/: tls baz (200; 2.696247ms) Sep 15 12:03:11.778: INFO: (5) /api/v1/namespaces/proxy-5482/pods/proxy-service-f9d65-t452h:162/proxy/: bar (200; 2.755563ms) Sep 15 12:03:11.778: INFO: (5) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-f9d65-t452h:1080/proxy/: ... 
(200; 2.756593ms) Sep 15 12:03:11.778: INFO: (5) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-f9d65-t452h:162/proxy/: bar (200; 2.877197ms) Sep 15 12:03:11.778: INFO: (5) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-f9d65-t452h:160/proxy/: foo (200; 2.81154ms) Sep 15 12:03:11.778: INFO: (5) /api/v1/namespaces/proxy-5482/pods/proxy-service-f9d65-t452h:160/proxy/: foo (200; 2.856424ms) Sep 15 12:03:11.778: INFO: (5) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-f9d65-t452h:443/proxy/: test (200; 2.870858ms) Sep 15 12:03:11.778: INFO: (5) /api/v1/namespaces/proxy-5482/services/http:proxy-service-f9d65:portname1/proxy/: foo (200; 3.574817ms) Sep 15 12:03:11.778: INFO: (5) /api/v1/namespaces/proxy-5482/services/https:proxy-service-f9d65:tlsportname2/proxy/: tls qux (200; 3.652049ms) Sep 15 12:03:11.778: INFO: (5) /api/v1/namespaces/proxy-5482/services/https:proxy-service-f9d65:tlsportname1/proxy/: tls baz (200; 3.592218ms) Sep 15 12:03:11.779: INFO: (5) /api/v1/namespaces/proxy-5482/services/http:proxy-service-f9d65:portname2/proxy/: bar (200; 3.718355ms) Sep 15 12:03:11.779: INFO: (5) /api/v1/namespaces/proxy-5482/services/proxy-service-f9d65:portname1/proxy/: foo (200; 3.818453ms) Sep 15 12:03:11.779: INFO: (5) /api/v1/namespaces/proxy-5482/services/proxy-service-f9d65:portname2/proxy/: bar (200; 3.889819ms) Sep 15 12:03:11.785: INFO: (6) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-f9d65-t452h:443/proxy/: test<... 
(200; 6.636757ms) Sep 15 12:03:11.787: INFO: (6) /api/v1/namespaces/proxy-5482/services/proxy-service-f9d65:portname2/proxy/: bar (200; 8.668402ms) Sep 15 12:03:11.788: INFO: (6) /api/v1/namespaces/proxy-5482/pods/proxy-service-f9d65-t452h:162/proxy/: bar (200; 8.838987ms) Sep 15 12:03:11.788: INFO: (6) /api/v1/namespaces/proxy-5482/services/http:proxy-service-f9d65:portname1/proxy/: foo (200; 8.897592ms) Sep 15 12:03:11.788: INFO: (6) /api/v1/namespaces/proxy-5482/services/http:proxy-service-f9d65:portname2/proxy/: bar (200; 8.942067ms) Sep 15 12:03:11.788: INFO: (6) /api/v1/namespaces/proxy-5482/services/https:proxy-service-f9d65:tlsportname2/proxy/: tls qux (200; 9.02638ms) Sep 15 12:03:11.788: INFO: (6) /api/v1/namespaces/proxy-5482/services/https:proxy-service-f9d65:tlsportname1/proxy/: tls baz (200; 9.085976ms) Sep 15 12:03:11.788: INFO: (6) /api/v1/namespaces/proxy-5482/pods/proxy-service-f9d65-t452h/proxy/: test (200; 9.02706ms) Sep 15 12:03:11.788: INFO: (6) /api/v1/namespaces/proxy-5482/services/proxy-service-f9d65:portname1/proxy/: foo (200; 8.993125ms) Sep 15 12:03:11.788: INFO: (6) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-f9d65-t452h:462/proxy/: tls qux (200; 9.440497ms) Sep 15 12:03:11.788: INFO: (6) /api/v1/namespaces/proxy-5482/pods/proxy-service-f9d65-t452h:160/proxy/: foo (200; 9.414223ms) Sep 15 12:03:11.788: INFO: (6) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-f9d65-t452h:160/proxy/: foo (200; 9.403164ms) Sep 15 12:03:11.788: INFO: (6) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-f9d65-t452h:162/proxy/: bar (200; 9.395747ms) Sep 15 12:03:11.788: INFO: (6) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-f9d65-t452h:460/proxy/: tls baz (200; 9.413153ms) Sep 15 12:03:11.788: INFO: (6) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-f9d65-t452h:1080/proxy/: ... 
(200; 9.436015ms) Sep 15 12:03:11.828: INFO: (7) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-f9d65-t452h:160/proxy/: foo (200; 39.369343ms) Sep 15 12:03:11.828: INFO: (7) /api/v1/namespaces/proxy-5482/pods/proxy-service-f9d65-t452h:1080/proxy/: test<... (200; 39.300709ms) Sep 15 12:03:11.828: INFO: (7) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-f9d65-t452h:1080/proxy/: ... (200; 39.364091ms) Sep 15 12:03:11.828: INFO: (7) /api/v1/namespaces/proxy-5482/pods/proxy-service-f9d65-t452h:160/proxy/: foo (200; 39.784695ms) Sep 15 12:03:11.828: INFO: (7) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-f9d65-t452h:460/proxy/: tls baz (200; 40.08382ms) Sep 15 12:03:11.828: INFO: (7) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-f9d65-t452h:462/proxy/: tls qux (200; 40.179593ms) Sep 15 12:03:11.829: INFO: (7) /api/v1/namespaces/proxy-5482/pods/proxy-service-f9d65-t452h:162/proxy/: bar (200; 40.30388ms) Sep 15 12:03:11.829: INFO: (7) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-f9d65-t452h:443/proxy/: test (200; 41.469946ms) Sep 15 12:03:11.830: INFO: (7) /api/v1/namespaces/proxy-5482/services/https:proxy-service-f9d65:tlsportname2/proxy/: tls qux (200; 41.383175ms) Sep 15 12:03:11.830: INFO: (7) /api/v1/namespaces/proxy-5482/services/http:proxy-service-f9d65:portname1/proxy/: foo (200; 41.401643ms) Sep 15 12:03:11.830: INFO: (7) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-f9d65-t452h:162/proxy/: bar (200; 41.483463ms) Sep 15 12:03:11.830: INFO: (7) /api/v1/namespaces/proxy-5482/services/https:proxy-service-f9d65:tlsportname1/proxy/: tls baz (200; 41.751404ms) Sep 15 12:03:11.835: INFO: (8) /api/v1/namespaces/proxy-5482/pods/proxy-service-f9d65-t452h:160/proxy/: foo (200; 4.424639ms) Sep 15 12:03:11.835: INFO: (8) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-f9d65-t452h:443/proxy/: test<... 
(200; 6.114429ms) Sep 15 12:03:11.837: INFO: (8) /api/v1/namespaces/proxy-5482/services/http:proxy-service-f9d65:portname2/proxy/: bar (200; 6.585163ms) Sep 15 12:03:11.837: INFO: (8) /api/v1/namespaces/proxy-5482/services/proxy-service-f9d65:portname2/proxy/: bar (200; 6.544914ms) Sep 15 12:03:11.837: INFO: (8) /api/v1/namespaces/proxy-5482/services/http:proxy-service-f9d65:portname1/proxy/: foo (200; 6.575654ms) Sep 15 12:03:11.837: INFO: (8) /api/v1/namespaces/proxy-5482/services/proxy-service-f9d65:portname1/proxy/: foo (200; 6.937584ms) Sep 15 12:03:11.837: INFO: (8) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-f9d65-t452h:1080/proxy/: ... (200; 6.977823ms) Sep 15 12:03:11.837: INFO: (8) /api/v1/namespaces/proxy-5482/services/https:proxy-service-f9d65:tlsportname1/proxy/: tls baz (200; 7.060115ms) Sep 15 12:03:11.837: INFO: (8) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-f9d65-t452h:462/proxy/: tls qux (200; 6.987848ms) Sep 15 12:03:11.837: INFO: (8) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-f9d65-t452h:460/proxy/: tls baz (200; 7.088914ms) Sep 15 12:03:11.837: INFO: (8) /api/v1/namespaces/proxy-5482/pods/proxy-service-f9d65-t452h/proxy/: test (200; 7.14972ms) Sep 15 12:03:11.837: INFO: (8) /api/v1/namespaces/proxy-5482/pods/proxy-service-f9d65-t452h:162/proxy/: bar (200; 7.190621ms) Sep 15 12:03:11.837: INFO: (8) /api/v1/namespaces/proxy-5482/services/https:proxy-service-f9d65:tlsportname2/proxy/: tls qux (200; 7.270379ms) Sep 15 12:03:11.838: INFO: (8) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-f9d65-t452h:160/proxy/: foo (200; 7.600649ms) Sep 15 12:03:11.840: INFO: (9) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-f9d65-t452h:460/proxy/: tls baz (200; 2.103682ms) Sep 15 12:03:11.840: INFO: (9) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-f9d65-t452h:443/proxy/: test<... 
(200; 2.756096ms) Sep 15 12:03:11.841: INFO: (9) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-f9d65-t452h:162/proxy/: bar (200; 3.184833ms) Sep 15 12:03:11.841: INFO: (9) /api/v1/namespaces/proxy-5482/pods/proxy-service-f9d65-t452h/proxy/: test (200; 3.26864ms) Sep 15 12:03:11.841: INFO: (9) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-f9d65-t452h:160/proxy/: foo (200; 3.279521ms) Sep 15 12:03:11.841: INFO: (9) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-f9d65-t452h:462/proxy/: tls qux (200; 3.336173ms) Sep 15 12:03:11.841: INFO: (9) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-f9d65-t452h:1080/proxy/: ... (200; 3.379926ms) Sep 15 12:03:11.841: INFO: (9) /api/v1/namespaces/proxy-5482/pods/proxy-service-f9d65-t452h:162/proxy/: bar (200; 3.465025ms) Sep 15 12:03:11.841: INFO: (9) /api/v1/namespaces/proxy-5482/pods/proxy-service-f9d65-t452h:160/proxy/: foo (200; 3.440382ms) Sep 15 12:03:11.842: INFO: (9) /api/v1/namespaces/proxy-5482/services/proxy-service-f9d65:portname2/proxy/: bar (200; 4.800756ms) Sep 15 12:03:11.843: INFO: (9) /api/v1/namespaces/proxy-5482/services/http:proxy-service-f9d65:portname2/proxy/: bar (200; 4.861766ms) Sep 15 12:03:11.843: INFO: (9) /api/v1/namespaces/proxy-5482/services/https:proxy-service-f9d65:tlsportname2/proxy/: tls qux (200; 4.908362ms) Sep 15 12:03:11.843: INFO: (9) /api/v1/namespaces/proxy-5482/services/http:proxy-service-f9d65:portname1/proxy/: foo (200; 4.945628ms) Sep 15 12:03:11.843: INFO: (9) /api/v1/namespaces/proxy-5482/services/https:proxy-service-f9d65:tlsportname1/proxy/: tls baz (200; 4.95871ms) Sep 15 12:03:11.843: INFO: (9) /api/v1/namespaces/proxy-5482/services/proxy-service-f9d65:portname1/proxy/: foo (200; 4.949115ms) Sep 15 12:03:11.845: INFO: (10) /api/v1/namespaces/proxy-5482/pods/proxy-service-f9d65-t452h/proxy/: test (200; 2.673896ms) Sep 15 12:03:11.846: INFO: (10) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-f9d65-t452h:1080/proxy/: ... 
(200; 2.831713ms) Sep 15 12:03:11.847: INFO: (10) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-f9d65-t452h:460/proxy/: tls baz (200; 3.812911ms) Sep 15 12:03:11.847: INFO: (10) /api/v1/namespaces/proxy-5482/pods/proxy-service-f9d65-t452h:1080/proxy/: test<... (200; 3.830296ms) Sep 15 12:03:11.847: INFO: (10) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-f9d65-t452h:160/proxy/: foo (200; 3.855038ms) Sep 15 12:03:11.847: INFO: (10) /api/v1/namespaces/proxy-5482/pods/proxy-service-f9d65-t452h:160/proxy/: foo (200; 3.837391ms) Sep 15 12:03:11.847: INFO: (10) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-f9d65-t452h:162/proxy/: bar (200; 3.826366ms) Sep 15 12:03:11.847: INFO: (10) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-f9d65-t452h:443/proxy/: test (200; 3.250937ms) Sep 15 12:03:11.856: INFO: (11) /api/v1/namespaces/proxy-5482/pods/proxy-service-f9d65-t452h:160/proxy/: foo (200; 3.366775ms) Sep 15 12:03:11.857: INFO: (11) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-f9d65-t452h:162/proxy/: bar (200; 3.365645ms) Sep 15 12:03:11.857: INFO: (11) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-f9d65-t452h:160/proxy/: foo (200; 3.499493ms) Sep 15 12:03:11.857: INFO: (11) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-f9d65-t452h:462/proxy/: tls qux (200; 3.484138ms) Sep 15 12:03:11.857: INFO: (11) /api/v1/namespaces/proxy-5482/pods/proxy-service-f9d65-t452h:1080/proxy/: test<... (200; 3.473099ms) Sep 15 12:03:11.858: INFO: (11) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-f9d65-t452h:1080/proxy/: ... 
(200; 4.350045ms) Sep 15 12:03:11.858: INFO: (11) /api/v1/namespaces/proxy-5482/services/https:proxy-service-f9d65:tlsportname1/proxy/: tls baz (200; 4.82884ms) Sep 15 12:03:11.858: INFO: (11) /api/v1/namespaces/proxy-5482/services/proxy-service-f9d65:portname2/proxy/: bar (200; 4.848528ms) Sep 15 12:03:11.858: INFO: (11) /api/v1/namespaces/proxy-5482/services/proxy-service-f9d65:portname1/proxy/: foo (200; 4.870662ms) Sep 15 12:03:11.858: INFO: (11) /api/v1/namespaces/proxy-5482/pods/proxy-service-f9d65-t452h:162/proxy/: bar (200; 5.094903ms) Sep 15 12:03:11.858: INFO: (11) /api/v1/namespaces/proxy-5482/services/https:proxy-service-f9d65:tlsportname2/proxy/: tls qux (200; 5.150938ms) Sep 15 12:03:11.858: INFO: (11) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-f9d65-t452h:443/proxy/: test (200; 4.834093ms) Sep 15 12:03:11.865: INFO: (12) /api/v1/namespaces/proxy-5482/pods/proxy-service-f9d65-t452h:1080/proxy/: test<... (200; 4.867659ms) Sep 15 12:03:11.865: INFO: (12) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-f9d65-t452h:460/proxy/: tls baz (200; 5.082138ms) Sep 15 12:03:11.865: INFO: (12) /api/v1/namespaces/proxy-5482/pods/proxy-service-f9d65-t452h:160/proxy/: foo (200; 5.117843ms) Sep 15 12:03:11.865: INFO: (12) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-f9d65-t452h:1080/proxy/: ... (200; 5.118808ms) Sep 15 12:03:11.868: INFO: (13) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-f9d65-t452h:443/proxy/: test<... 
(200; 3.789045ms) Sep 15 12:03:11.869: INFO: (13) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-f9d65-t452h:162/proxy/: bar (200; 3.905042ms) Sep 15 12:03:11.869: INFO: (13) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-f9d65-t452h:160/proxy/: foo (200; 4.217618ms) Sep 15 12:03:11.869: INFO: (13) /api/v1/namespaces/proxy-5482/pods/proxy-service-f9d65-t452h:160/proxy/: foo (200; 4.161975ms) Sep 15 12:03:11.872: INFO: (13) /api/v1/namespaces/proxy-5482/pods/proxy-service-f9d65-t452h:162/proxy/: bar (200; 7.157843ms) Sep 15 12:03:11.873: INFO: (13) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-f9d65-t452h:1080/proxy/: ... (200; 7.620169ms) Sep 15 12:03:11.873: INFO: (13) /api/v1/namespaces/proxy-5482/pods/proxy-service-f9d65-t452h/proxy/: test (200; 7.612683ms) Sep 15 12:03:11.873: INFO: (13) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-f9d65-t452h:460/proxy/: tls baz (200; 7.963679ms) Sep 15 12:03:11.873: INFO: (13) /api/v1/namespaces/proxy-5482/services/http:proxy-service-f9d65:portname1/proxy/: foo (200; 8.215017ms) Sep 15 12:03:11.873: INFO: (13) /api/v1/namespaces/proxy-5482/services/proxy-service-f9d65:portname1/proxy/: foo (200; 8.274684ms) Sep 15 12:03:11.873: INFO: (13) /api/v1/namespaces/proxy-5482/services/proxy-service-f9d65:portname2/proxy/: bar (200; 8.257384ms) Sep 15 12:03:11.873: INFO: (13) /api/v1/namespaces/proxy-5482/services/http:proxy-service-f9d65:portname2/proxy/: bar (200; 8.272868ms) Sep 15 12:03:11.873: INFO: (13) /api/v1/namespaces/proxy-5482/services/https:proxy-service-f9d65:tlsportname2/proxy/: tls qux (200; 8.325208ms) Sep 15 12:03:11.874: INFO: (13) /api/v1/namespaces/proxy-5482/services/https:proxy-service-f9d65:tlsportname1/proxy/: tls baz (200; 8.260709ms) Sep 15 12:03:11.876: INFO: (14) /api/v1/namespaces/proxy-5482/pods/proxy-service-f9d65-t452h:1080/proxy/: test<... 
(200; 1.959175ms) Sep 15 12:03:11.877: INFO: (14) /api/v1/namespaces/proxy-5482/services/proxy-service-f9d65:portname2/proxy/: bar (200; 3.872911ms) Sep 15 12:03:11.877: INFO: (14) /api/v1/namespaces/proxy-5482/services/proxy-service-f9d65:portname1/proxy/: foo (200; 3.919ms) Sep 15 12:03:11.878: INFO: (14) /api/v1/namespaces/proxy-5482/services/http:proxy-service-f9d65:portname2/proxy/: bar (200; 4.049787ms) Sep 15 12:03:11.878: INFO: (14) /api/v1/namespaces/proxy-5482/pods/proxy-service-f9d65-t452h:162/proxy/: bar (200; 4.058984ms) Sep 15 12:03:11.878: INFO: (14) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-f9d65-t452h:443/proxy/: test (200; 4.375296ms) Sep 15 12:03:11.878: INFO: (14) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-f9d65-t452h:462/proxy/: tls qux (200; 4.396649ms) Sep 15 12:03:11.878: INFO: (14) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-f9d65-t452h:460/proxy/: tls baz (200; 4.392064ms) Sep 15 12:03:11.878: INFO: (14) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-f9d65-t452h:1080/proxy/: ... (200; 4.471055ms) Sep 15 12:03:11.881: INFO: (15) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-f9d65-t452h:460/proxy/: tls baz (200; 2.92257ms) Sep 15 12:03:11.881: INFO: (15) /api/v1/namespaces/proxy-5482/pods/proxy-service-f9d65-t452h:160/proxy/: foo (200; 2.971928ms) Sep 15 12:03:11.882: INFO: (15) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-f9d65-t452h:1080/proxy/: ... (200; 3.74011ms) Sep 15 12:03:11.883: INFO: (15) /api/v1/namespaces/proxy-5482/services/https:proxy-service-f9d65:tlsportname1/proxy/: tls baz (200; 4.523954ms) Sep 15 12:03:11.883: INFO: (15) /api/v1/namespaces/proxy-5482/services/http:proxy-service-f9d65:portname2/proxy/: bar (200; 4.543829ms) Sep 15 12:03:11.883: INFO: (15) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-f9d65-t452h:443/proxy/: test<... 
(200; 5.178912ms) Sep 15 12:03:11.884: INFO: (15) /api/v1/namespaces/proxy-5482/services/proxy-service-f9d65:portname1/proxy/: foo (200; 5.388734ms) Sep 15 12:03:11.884: INFO: (15) /api/v1/namespaces/proxy-5482/services/https:proxy-service-f9d65:tlsportname2/proxy/: tls qux (200; 5.456236ms) Sep 15 12:03:11.884: INFO: (15) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-f9d65-t452h:462/proxy/: tls qux (200; 5.383089ms) Sep 15 12:03:11.884: INFO: (15) /api/v1/namespaces/proxy-5482/pods/proxy-service-f9d65-t452h/proxy/: test (200; 5.477473ms) Sep 15 12:03:11.884: INFO: (15) /api/v1/namespaces/proxy-5482/services/proxy-service-f9d65:portname2/proxy/: bar (200; 5.397448ms) Sep 15 12:03:11.886: INFO: (16) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-f9d65-t452h:462/proxy/: tls qux (200; 2.616796ms) Sep 15 12:03:11.887: INFO: (16) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-f9d65-t452h:460/proxy/: tls baz (200; 2.945629ms) Sep 15 12:03:11.888: INFO: (16) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-f9d65-t452h:1080/proxy/: ... 
(200; 3.755652ms) Sep 15 12:03:11.888: INFO: (16) /api/v1/namespaces/proxy-5482/services/proxy-service-f9d65:portname2/proxy/: bar (200; 3.786453ms) Sep 15 12:03:11.888: INFO: (16) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-f9d65-t452h:160/proxy/: foo (200; 3.679903ms) Sep 15 12:03:11.888: INFO: (16) /api/v1/namespaces/proxy-5482/services/http:proxy-service-f9d65:portname1/proxy/: foo (200; 3.806297ms) Sep 15 12:03:11.888: INFO: (16) /api/v1/namespaces/proxy-5482/services/https:proxy-service-f9d65:tlsportname2/proxy/: tls qux (200; 3.844725ms) Sep 15 12:03:11.888: INFO: (16) /api/v1/namespaces/proxy-5482/pods/proxy-service-f9d65-t452h:160/proxy/: foo (200; 3.87079ms) Sep 15 12:03:11.888: INFO: (16) /api/v1/namespaces/proxy-5482/pods/proxy-service-f9d65-t452h:162/proxy/: bar (200; 3.842626ms) Sep 15 12:03:11.888: INFO: (16) /api/v1/namespaces/proxy-5482/pods/proxy-service-f9d65-t452h:1080/proxy/: test<... (200; 3.860866ms) Sep 15 12:03:11.888: INFO: (16) /api/v1/namespaces/proxy-5482/pods/proxy-service-f9d65-t452h/proxy/: test (200; 4.127451ms) Sep 15 12:03:11.888: INFO: (16) /api/v1/namespaces/proxy-5482/services/proxy-service-f9d65:portname1/proxy/: foo (200; 4.115463ms) Sep 15 12:03:11.888: INFO: (16) /api/v1/namespaces/proxy-5482/services/https:proxy-service-f9d65:tlsportname1/proxy/: tls baz (200; 4.175062ms) Sep 15 12:03:11.888: INFO: (16) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-f9d65-t452h:162/proxy/: bar (200; 4.273775ms) Sep 15 12:03:11.888: INFO: (16) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-f9d65-t452h:443/proxy/: ... 
(200; 3.477002ms) Sep 15 12:03:11.894: INFO: (17) /api/v1/namespaces/proxy-5482/pods/proxy-service-f9d65-t452h:160/proxy/: foo (200; 3.707717ms) Sep 15 12:03:11.894: INFO: (17) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-f9d65-t452h:162/proxy/: bar (200; 3.737041ms) Sep 15 12:03:11.894: INFO: (17) /api/v1/namespaces/proxy-5482/services/https:proxy-service-f9d65:tlsportname1/proxy/: tls baz (200; 3.76816ms) Sep 15 12:03:11.894: INFO: (17) /api/v1/namespaces/proxy-5482/services/http:proxy-service-f9d65:portname2/proxy/: bar (200; 3.841127ms) Sep 15 12:03:11.894: INFO: (17) /api/v1/namespaces/proxy-5482/services/proxy-service-f9d65:portname1/proxy/: foo (200; 3.877552ms) Sep 15 12:03:11.894: INFO: (17) /api/v1/namespaces/proxy-5482/pods/proxy-service-f9d65-t452h:1080/proxy/: test<... (200; 3.908336ms) Sep 15 12:03:11.894: INFO: (17) /api/v1/namespaces/proxy-5482/pods/proxy-service-f9d65-t452h/proxy/: test (200; 3.970573ms) Sep 15 12:03:11.894: INFO: (17) /api/v1/namespaces/proxy-5482/pods/proxy-service-f9d65-t452h:162/proxy/: bar (200; 4.011328ms) Sep 15 12:03:11.894: INFO: (17) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-f9d65-t452h:462/proxy/: tls qux (200; 3.953056ms) Sep 15 12:03:11.894: INFO: (17) /api/v1/namespaces/proxy-5482/services/https:proxy-service-f9d65:tlsportname2/proxy/: tls qux (200; 4.004159ms) Sep 15 12:03:11.897: INFO: (18) /api/v1/namespaces/proxy-5482/pods/proxy-service-f9d65-t452h:160/proxy/: foo (200; 2.54097ms) Sep 15 12:03:11.898: INFO: (18) /api/v1/namespaces/proxy-5482/pods/proxy-service-f9d65-t452h/proxy/: test (200; 3.083699ms) Sep 15 12:03:11.898: INFO: (18) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-f9d65-t452h:1080/proxy/: ... 
(200; 3.132318ms) Sep 15 12:03:11.898: INFO: (18) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-f9d65-t452h:160/proxy/: foo (200; 3.15655ms) Sep 15 12:03:11.898: INFO: (18) /api/v1/namespaces/proxy-5482/services/http:proxy-service-f9d65:portname2/proxy/: bar (200; 3.6317ms) Sep 15 12:03:11.898: INFO: (18) /api/v1/namespaces/proxy-5482/services/http:proxy-service-f9d65:portname1/proxy/: foo (200; 3.642066ms) Sep 15 12:03:11.898: INFO: (18) /api/v1/namespaces/proxy-5482/services/proxy-service-f9d65:portname2/proxy/: bar (200; 3.699789ms) Sep 15 12:03:11.898: INFO: (18) /api/v1/namespaces/proxy-5482/services/proxy-service-f9d65:portname1/proxy/: foo (200; 3.755673ms) Sep 15 12:03:11.898: INFO: (18) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-f9d65-t452h:162/proxy/: bar (200; 3.748379ms) Sep 15 12:03:11.898: INFO: (18) /api/v1/namespaces/proxy-5482/services/https:proxy-service-f9d65:tlsportname2/proxy/: tls qux (200; 3.72534ms) Sep 15 12:03:11.898: INFO: (18) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-f9d65-t452h:443/proxy/: test<... (200; 4.020108ms) Sep 15 12:03:11.898: INFO: (18) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-f9d65-t452h:462/proxy/: tls qux (200; 3.971757ms) Sep 15 12:03:11.898: INFO: (18) /api/v1/namespaces/proxy-5482/pods/proxy-service-f9d65-t452h:162/proxy/: bar (200; 4.054133ms) Sep 15 12:03:11.899: INFO: (18) /api/v1/namespaces/proxy-5482/services/https:proxy-service-f9d65:tlsportname1/proxy/: tls baz (200; 4.106342ms) Sep 15 12:03:11.899: INFO: (18) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-f9d65-t452h:460/proxy/: tls baz (200; 4.201656ms) Sep 15 12:03:11.901: INFO: (19) /api/v1/namespaces/proxy-5482/pods/proxy-service-f9d65-t452h:160/proxy/: foo (200; 1.957614ms) Sep 15 12:03:11.901: INFO: (19) /api/v1/namespaces/proxy-5482/pods/proxy-service-f9d65-t452h:1080/proxy/: test<... 
(200; 2.663671ms) Sep 15 12:03:11.902: INFO: (19) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-f9d65-t452h:460/proxy/: tls baz (200; 2.826923ms) Sep 15 12:03:11.902: INFO: (19) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-f9d65-t452h:1080/proxy/: ... (200; 3.02222ms) Sep 15 12:03:11.902: INFO: (19) /api/v1/namespaces/proxy-5482/services/http:proxy-service-f9d65:portname1/proxy/: foo (200; 3.543526ms) Sep 15 12:03:11.902: INFO: (19) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-f9d65-t452h:443/proxy/: test (200; 6.439335ms) Sep 15 12:03:11.905: INFO: (19) /api/v1/namespaces/proxy-5482/pods/https:proxy-service-f9d65-t452h:462/proxy/: tls qux (200; 6.505741ms) Sep 15 12:03:11.905: INFO: (19) /api/v1/namespaces/proxy-5482/services/http:proxy-service-f9d65:portname2/proxy/: bar (200; 6.489008ms) Sep 15 12:03:11.908: INFO: (19) /api/v1/namespaces/proxy-5482/pods/http:proxy-service-f9d65-t452h:160/proxy/: foo (200; 8.873151ms) STEP: deleting ReplicationController proxy-service-f9d65 in namespace proxy-5482, will wait for the garbage collector to delete the pods Sep 15 12:03:11.967: INFO: Deleting ReplicationController proxy-service-f9d65 took: 6.750592ms Sep 15 12:03:12.467: INFO: Terminating ReplicationController proxy-service-f9d65 pods took: 500.176492ms [AfterEach] version v1 /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 12:03:23.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-5482" for this suite. 
• [SLOW TEST:25.632 seconds] [sig-network] Proxy /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:59 should proxy through a service and a pod [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":303,"completed":278,"skipped":4555,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Events /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 12:03:23.337: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a test event STEP: listing all events in all namespaces STEP: patching the test event STEP: fetching the test event STEP: deleting the test event STEP: listing all events in all namespaces [AfterEach] [sig-api-machinery] Events 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 12:03:24.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-9812" for this suite. •{"msg":"PASSED [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":303,"completed":279,"skipped":4566,"failed":0} SSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 12:03:24.260: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: validating cluster-info Sep 15 12:03:24.359: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config cluster-info' Sep 15 12:03:30.929: INFO: stderr: "" Sep 15 12:03:30.929: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:46255\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is 
running at \x1b[0;33mhttps://172.30.12.66:46255/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 12:03:30.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1606" for this suite. • [SLOW TEST:6.675 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl cluster-info /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1079 should check if Kubernetes master services is included in cluster-info [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":303,"completed":280,"skipped":4573,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 12:03:30.935: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, 
basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 15 12:03:31.020: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7969722d-812f-4e3d-a67c-a41543fda90c" in namespace "projected-479" to be "Succeeded or Failed" Sep 15 12:03:31.060: INFO: Pod "downwardapi-volume-7969722d-812f-4e3d-a67c-a41543fda90c": Phase="Pending", Reason="", readiness=false. Elapsed: 39.370113ms Sep 15 12:03:33.064: INFO: Pod "downwardapi-volume-7969722d-812f-4e3d-a67c-a41543fda90c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043585587s Sep 15 12:03:35.067: INFO: Pod "downwardapi-volume-7969722d-812f-4e3d-a67c-a41543fda90c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046826087s Sep 15 12:03:37.070: INFO: Pod "downwardapi-volume-7969722d-812f-4e3d-a67c-a41543fda90c": Phase="Running", Reason="", readiness=true. Elapsed: 6.049758013s Sep 15 12:03:39.150: INFO: Pod "downwardapi-volume-7969722d-812f-4e3d-a67c-a41543fda90c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.129129946s STEP: Saw pod success Sep 15 12:03:39.150: INFO: Pod "downwardapi-volume-7969722d-812f-4e3d-a67c-a41543fda90c" satisfied condition "Succeeded or Failed" Sep 15 12:03:39.259: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-7969722d-812f-4e3d-a67c-a41543fda90c container client-container: STEP: delete the pod Sep 15 12:03:39.385: INFO: Waiting for pod downwardapi-volume-7969722d-812f-4e3d-a67c-a41543fda90c to disappear Sep 15 12:03:39.401: INFO: Pod downwardapi-volume-7969722d-812f-4e3d-a67c-a41543fda90c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 12:03:39.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-479" for this suite. • [SLOW TEST:8.472 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":303,"completed":281,"skipped":4580,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 12:03:39.407: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 12:03:55.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2051" for this suite. 
• [SLOW TEST:16.461 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":303,"completed":282,"skipped":4601,"failed":0} SSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 12:03:55.868: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:78 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 15 12:03:55.930: INFO: Creating deployment "test-recreate-deployment" Sep 15 12:03:55.934: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Sep 15 12:03:55.995: INFO: deployment "test-recreate-deployment" doesn't have 
the required revision set Sep 15 12:03:58.703: INFO: Waiting deployment "test-recreate-deployment" to complete Sep 15 12:03:58.705: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735768236, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735768236, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735768236, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735768235, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-c96cf48f\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 15 12:04:00.709: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735768236, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735768236, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735768236, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735768235, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"test-recreate-deployment-c96cf48f\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 15 12:04:03.173: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735768236, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735768236, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735768236, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735768235, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-c96cf48f\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 15 12:04:06.350: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735768236, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735768236, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735768236, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735768235, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-c96cf48f\" is progressing."}}, 
CollisionCount:(*int32)(nil)} Sep 15 12:04:08.434: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735768236, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735768236, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735768236, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735768235, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-c96cf48f\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 15 12:04:10.074: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735768236, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735768236, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735768236, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735768235, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-c96cf48f\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 15 12:04:10.786: INFO: 
deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735768236, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735768236, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735768236, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735768235, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-c96cf48f\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 15 12:04:12.708: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735768236, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735768236, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735768236, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735768235, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-c96cf48f\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 15 12:04:14.708: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735768236, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735768236, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735768236, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735768235, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-c96cf48f\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 15 12:04:16.780: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735768236, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735768236, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735768236, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735768235, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-c96cf48f\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 15 12:04:18.708: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Sep 15 
12:04:18.712: INFO: Updating deployment test-recreate-deployment Sep 15 12:04:18.712: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 Sep 15 12:04:20.307: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-7446 /apis/apps/v1/namespaces/deployment-7446/deployments/test-recreate-deployment 02742e2f-ff54-4e47-8936-2c5437b18707 460919 2 2020-09-15 12:03:55 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-09-15 12:04:18 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-09-15 12:04:20 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc005d2ffc8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-09-15 12:04:20 +0000 UTC,LastTransitionTime:2020-09-15 12:04:20 +0000 
UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-f79dd4667" is progressing.,LastUpdateTime:2020-09-15 12:04:20 +0000 UTC,LastTransitionTime:2020-09-15 12:03:55 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Sep 15 12:04:20.331: INFO: New ReplicaSet "test-recreate-deployment-f79dd4667" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-f79dd4667 deployment-7446 /apis/apps/v1/namespaces/deployment-7446/replicasets/test-recreate-deployment-f79dd4667 c2e67fef-92fe-48fb-884c-0556e1705010 460913 1 2020-09-15 12:04:19 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 02742e2f-ff54-4e47-8936-2c5437b18707 0xc003f5c900 0xc003f5c901}] [] [{kube-controller-manager Update apps/v1 2020-09-15 12:04:20 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"02742e2f-ff54-4e47-8936-2c5437b18707\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: f79dd4667,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003f5c978 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil 
default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Sep 15 12:04:20.331: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Sep 15 12:04:20.331: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-c96cf48f deployment-7446 /apis/apps/v1/namespaces/deployment-7446/replicasets/test-recreate-deployment-c96cf48f 9d69ec2a-d72f-41e7-84d9-aac591b96bc1 460902 2 2020-09-15 12:03:55 +0000 UTC map[name:sample-pod-3 pod-template-hash:c96cf48f] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 02742e2f-ff54-4e47-8936-2c5437b18707 0xc003f5c80f 0xc003f5c820}] [] [{kube-controller-manager Update apps/v1 2020-09-15 12:04:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"02742e2f-ff54-4e47-8936-2c5437b18707\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelec
tor{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: c96cf48f,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:c96cf48f] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003f5c898 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Sep 15 12:04:20.333: INFO: Pod "test-recreate-deployment-f79dd4667-gprr5" is not available: &Pod{ObjectMeta:{test-recreate-deployment-f79dd4667-gprr5 test-recreate-deployment-f79dd4667- deployment-7446 /api/v1/namespaces/deployment-7446/pods/test-recreate-deployment-f79dd4667-gprr5 5a7db179-c12e-49cd-ab36-0ca3ff78ca78 460918 0 2020-09-15 12:04:19 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] map[] [{apps/v1 ReplicaSet test-recreate-deployment-f79dd4667 c2e67fef-92fe-48fb-884c-0556e1705010 0xc003f5ce30 0xc003f5ce31}] [] [{kube-controller-manager Update v1 2020-09-15 12:04:19 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c2e67fef-92fe-48fb-884c-0556e1705010\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-15 12:04:20 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5r9xf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5r9xf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{
},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5r9xf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Ph
ase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 12:04:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 12:04:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 12:04:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 12:04:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:,StartTime:2020-09-15 12:04:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 12:04:20.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7446" for this suite. 
• [SLOW TEST:24.475 seconds] [sig-apps] Deployment /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":303,"completed":283,"skipped":4605,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 12:04:20.344: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Sep 15 12:04:29.600: INFO: Expected: 
&{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 12:04:29.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8051" for this suite. • [SLOW TEST:9.418 seconds] [k8s.io] Container Runtime /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 on terminated container /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:134 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":303,"completed":284,"skipped":4616,"failed":0} SSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 12:04:29.762: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-a0d7d8b6-4a4a-491d-884b-c4bbe7aa6b02 STEP: Creating a pod to test consume configMaps Sep 15 12:04:31.380: INFO: Waiting up to 5m0s for pod "pod-configmaps-79afe440-bfaf-4666-a48c-f328fed6bbd3" in namespace "configmap-9870" to be "Succeeded or Failed" Sep 15 12:04:31.383: INFO: Pod "pod-configmaps-79afe440-bfaf-4666-a48c-f328fed6bbd3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.586776ms Sep 15 12:04:33.385: INFO: Pod "pod-configmaps-79afe440-bfaf-4666-a48c-f328fed6bbd3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005351393s Sep 15 12:04:35.526: INFO: Pod "pod-configmaps-79afe440-bfaf-4666-a48c-f328fed6bbd3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.145903913s Sep 15 12:04:37.529: INFO: Pod "pod-configmaps-79afe440-bfaf-4666-a48c-f328fed6bbd3": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.14941296s STEP: Saw pod success Sep 15 12:04:37.529: INFO: Pod "pod-configmaps-79afe440-bfaf-4666-a48c-f328fed6bbd3" satisfied condition "Succeeded or Failed" Sep 15 12:04:37.532: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-79afe440-bfaf-4666-a48c-f328fed6bbd3 container configmap-volume-test: STEP: delete the pod Sep 15 12:04:37.575: INFO: Waiting for pod pod-configmaps-79afe440-bfaf-4666-a48c-f328fed6bbd3 to disappear Sep 15 12:04:37.579: INFO: Pod pod-configmaps-79afe440-bfaf-4666-a48c-f328fed6bbd3 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 12:04:37.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9870" for this suite. • [SLOW TEST:7.823 seconds] [sig-storage] ConfigMap /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":303,"completed":285,"skipped":4622,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 12:04:37.586: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 12:04:54.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9660" for this suite. • [SLOW TEST:17.187 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. 
[Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":303,"completed":286,"skipped":4639,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 12:04:54.773: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Sep 15 12:05:01.746: INFO: 10 pods remaining Sep 15 12:05:01.746: INFO: 10 pods has nil DeletionTimestamp Sep 15 12:05:01.746: INFO: Sep 15 12:05:04.347: INFO: 9 pods remaining Sep 15 12:05:04.347: INFO: 0 pods has nil DeletionTimestamp Sep 15 12:05:04.347: INFO: STEP: Gathering metrics W0915 12:05:04.839585 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Sep 15 12:06:06.914: INFO: MetricsGrabber failed grab metrics. 
Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 12:06:06.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7651" for this suite. • [SLOW TEST:72.149 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":303,"completed":287,"skipped":4679,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 12:06:06.922: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 15 12:06:07.059: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bdb10585-3314-4ad7-9ac9-0446d84df3eb" in namespace "projected-1175" to be "Succeeded or Failed" Sep 15 12:06:07.061: INFO: Pod "downwardapi-volume-bdb10585-3314-4ad7-9ac9-0446d84df3eb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.646372ms Sep 15 12:06:09.066: INFO: Pod "downwardapi-volume-bdb10585-3314-4ad7-9ac9-0446d84df3eb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006681985s Sep 15 12:06:11.070: INFO: Pod "downwardapi-volume-bdb10585-3314-4ad7-9ac9-0446d84df3eb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011134277s Sep 15 12:06:13.507: INFO: Pod "downwardapi-volume-bdb10585-3314-4ad7-9ac9-0446d84df3eb": Phase="Running", Reason="", readiness=true. Elapsed: 6.448211118s Sep 15 12:06:15.513: INFO: Pod "downwardapi-volume-bdb10585-3314-4ad7-9ac9-0446d84df3eb": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.453693502s STEP: Saw pod success Sep 15 12:06:15.513: INFO: Pod "downwardapi-volume-bdb10585-3314-4ad7-9ac9-0446d84df3eb" satisfied condition "Succeeded or Failed" Sep 15 12:06:15.516: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-bdb10585-3314-4ad7-9ac9-0446d84df3eb container client-container: STEP: delete the pod Sep 15 12:06:16.100: INFO: Waiting for pod downwardapi-volume-bdb10585-3314-4ad7-9ac9-0446d84df3eb to disappear Sep 15 12:06:16.313: INFO: Pod downwardapi-volume-bdb10585-3314-4ad7-9ac9-0446d84df3eb no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 12:06:16.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1175" for this suite. • [SLOW TEST:9.446 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":288,"skipped":4693,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 12:06:16.368: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted Sep 15 12:06:31.233: INFO: 5 pods remaining Sep 15 12:06:31.233: INFO: 5 pods has nil DeletionTimestamp Sep 15 12:06:31.233: INFO: STEP: Gathering metrics W0915 12:06:35.965740 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Sep 15 12:07:37.983: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. 
Sep 15 12:07:37.983: INFO: Deleting pod "simpletest-rc-to-be-deleted-2md84" in namespace "gc-441" Sep 15 12:07:38.730: INFO: Deleting pod "simpletest-rc-to-be-deleted-478vp" in namespace "gc-441" Sep 15 12:07:39.010: INFO: Deleting pod "simpletest-rc-to-be-deleted-h2xqs" in namespace "gc-441" Sep 15 12:07:39.225: INFO: Deleting pod "simpletest-rc-to-be-deleted-hhq6b" in namespace "gc-441" Sep 15 12:07:39.429: INFO: Deleting pod "simpletest-rc-to-be-deleted-lrlqn" in namespace "gc-441" [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 12:07:39.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-441" for this suite. • [SLOW TEST:83.373 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":303,"completed":289,"skipped":4706,"failed":0} SSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] Events 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 12:07:39.742: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Sep 15 12:07:52.112: INFO: &Pod{ObjectMeta:{send-events-2c4a919c-d35d-4e05-9742-0b472fc55817 events-6500 /api/v1/namespaces/events-6500/pods/send-events-2c4a919c-d35d-4e05-9742-0b472fc55817 3f49d088-1261-4352-83bb-091ffde5f9d2 461901 0 2020-09-15 12:07:40 +0000 UTC map[name:foo time:918145658] map[] [] [] [{e2e.test Update v1 2020-09-15 12:07:39 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"p\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-15 12:07:51 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.251\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lwnlw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lwnlw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:p,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lwnlw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,
DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 12:07:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 12:07:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 12:07:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-15 12:07:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:10.244.2.251,StartTime:2020-09-15 12:07:40 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-15 12:07:51 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0,ContainerID:containerd://04924409c69cedff5ef3bc71092eb072a51eee01a1fdb9face4c9e39cb550d4c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.251,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Sep 15 12:07:54.117: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Sep 15 12:07:56.121: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 12:07:56.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-6500" for this suite. 
• [SLOW TEST:16.401 seconds] [k8s.io] [sig-node] Events /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":303,"completed":290,"skipped":4712,"failed":0} SSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 12:07:56.144: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap configmap-2017/configmap-test-f1be3016-f207-401b-95a8-0b32f88eac8a STEP: Creating a pod to test consume configMaps Sep 15 12:07:56.246: INFO: Waiting up to 5m0s for pod "pod-configmaps-39dfee2b-332f-406a-9f5e-bad8d649e142" in namespace "configmap-2017" to be "Succeeded or Failed" Sep 15 12:07:56.256: INFO: Pod "pod-configmaps-39dfee2b-332f-406a-9f5e-bad8d649e142": 
Phase="Pending", Reason="", readiness=false. Elapsed: 9.652417ms Sep 15 12:07:58.260: INFO: Pod "pod-configmaps-39dfee2b-332f-406a-9f5e-bad8d649e142": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013658354s Sep 15 12:08:00.264: INFO: Pod "pod-configmaps-39dfee2b-332f-406a-9f5e-bad8d649e142": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017429519s STEP: Saw pod success Sep 15 12:08:00.264: INFO: Pod "pod-configmaps-39dfee2b-332f-406a-9f5e-bad8d649e142" satisfied condition "Succeeded or Failed" Sep 15 12:08:00.266: INFO: Trying to get logs from node kali-worker pod pod-configmaps-39dfee2b-332f-406a-9f5e-bad8d649e142 container env-test: STEP: delete the pod Sep 15 12:08:00.307: INFO: Waiting for pod pod-configmaps-39dfee2b-332f-406a-9f5e-bad8d649e142 to disappear Sep 15 12:08:00.316: INFO: Pod pod-configmaps-39dfee2b-332f-406a-9f5e-bad8d649e142 no longer exists [AfterEach] [sig-node] ConfigMap /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 12:08:00.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2017" for this suite. 
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":303,"completed":291,"skipped":4715,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 12:08:00.322: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 15 12:08:00.891: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 15 12:08:02.997: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735768480, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735768480, loc:(*time.Location)(0x7702840)}}, 
Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735768481, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735768480, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 15 12:08:06.030: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 12:08:06.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7412" for this suite. STEP: Destroying namespace "webhook-7412-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.829 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":303,"completed":292,"skipped":4759,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 12:08:06.151: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name 
configmap-test-volume-4af2b7b7-d445-4c68-9091-8837d15c0cdb STEP: Creating a pod to test consume configMaps Sep 15 12:08:06.253: INFO: Waiting up to 5m0s for pod "pod-configmaps-193fdf06-bac7-45b5-8599-043e2da92fa0" in namespace "configmap-4409" to be "Succeeded or Failed" Sep 15 12:08:06.270: INFO: Pod "pod-configmaps-193fdf06-bac7-45b5-8599-043e2da92fa0": Phase="Pending", Reason="", readiness=false. Elapsed: 16.053609ms Sep 15 12:08:08.273: INFO: Pod "pod-configmaps-193fdf06-bac7-45b5-8599-043e2da92fa0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019931308s Sep 15 12:08:10.297: INFO: Pod "pod-configmaps-193fdf06-bac7-45b5-8599-043e2da92fa0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043947221s STEP: Saw pod success Sep 15 12:08:10.298: INFO: Pod "pod-configmaps-193fdf06-bac7-45b5-8599-043e2da92fa0" satisfied condition "Succeeded or Failed" Sep 15 12:08:10.301: INFO: Trying to get logs from node kali-worker pod pod-configmaps-193fdf06-bac7-45b5-8599-043e2da92fa0 container configmap-volume-test: STEP: delete the pod Sep 15 12:08:10.337: INFO: Waiting for pod pod-configmaps-193fdf06-bac7-45b5-8599-043e2da92fa0 to disappear Sep 15 12:08:10.353: INFO: Pod pod-configmaps-193fdf06-bac7-45b5-8599-043e2da92fa0 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 12:08:10.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4409" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":293,"skipped":4772,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 12:08:10.363: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check is all data is printed [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 15 12:08:10.431: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config version' Sep 15 12:08:10.603: INFO: stderr: "" Sep 15 12:08:10.603: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"19\", GitVersion:\"v1.19.1\", GitCommit:\"206bcadf021e76c27513500ca24182692aabd17e\", GitTreeState:\"clean\", BuildDate:\"2020-09-09T11:26:42Z\", GoVersion:\"go1.15\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"19\", GitVersion:\"v1.19.0\", GitCommit:\"e19964183377d0ec2052d1f1fa930c4d7575bd50\", GitTreeState:\"clean\", BuildDate:\"2020-08-28T22:11:08Z\", GoVersion:\"go1.15\", 
Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 12:08:10.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4444" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":303,"completed":294,"skipped":4788,"failed":0} S ------------------------------ [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 12:08:10.613: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-2007 Sep 15 12:08:14.702: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=services-2007 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Sep 15 
12:08:14.958: INFO: stderr: "I0915 12:08:14.846558 3365 log.go:181] (0xc000142370) (0xc000c680a0) Create stream\nI0915 12:08:14.846613 3365 log.go:181] (0xc000142370) (0xc000c680a0) Stream added, broadcasting: 1\nI0915 12:08:14.848439 3365 log.go:181] (0xc000142370) Reply frame received for 1\nI0915 12:08:14.848479 3365 log.go:181] (0xc000142370) (0xc000d101e0) Create stream\nI0915 12:08:14.848488 3365 log.go:181] (0xc000142370) (0xc000d101e0) Stream added, broadcasting: 3\nI0915 12:08:14.849239 3365 log.go:181] (0xc000142370) Reply frame received for 3\nI0915 12:08:14.849263 3365 log.go:181] (0xc000142370) (0xc000d10280) Create stream\nI0915 12:08:14.849270 3365 log.go:181] (0xc000142370) (0xc000d10280) Stream added, broadcasting: 5\nI0915 12:08:14.850036 3365 log.go:181] (0xc000142370) Reply frame received for 5\nI0915 12:08:14.946106 3365 log.go:181] (0xc000142370) Data frame received for 5\nI0915 12:08:14.946137 3365 log.go:181] (0xc000d10280) (5) Data frame handling\nI0915 12:08:14.946145 3365 log.go:181] (0xc000d10280) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI0915 12:08:14.951165 3365 log.go:181] (0xc000142370) Data frame received for 3\nI0915 12:08:14.951194 3365 log.go:181] (0xc000d101e0) (3) Data frame handling\nI0915 12:08:14.951219 3365 log.go:181] (0xc000d101e0) (3) Data frame sent\nI0915 12:08:14.951788 3365 log.go:181] (0xc000142370) Data frame received for 3\nI0915 12:08:14.951827 3365 log.go:181] (0xc000142370) Data frame received for 5\nI0915 12:08:14.951874 3365 log.go:181] (0xc000d10280) (5) Data frame handling\nI0915 12:08:14.951917 3365 log.go:181] (0xc000d101e0) (3) Data frame handling\nI0915 12:08:14.953596 3365 log.go:181] (0xc000142370) Data frame received for 1\nI0915 12:08:14.953616 3365 log.go:181] (0xc000c680a0) (1) Data frame handling\nI0915 12:08:14.953626 3365 log.go:181] (0xc000c680a0) (1) Data frame sent\nI0915 12:08:14.953636 3365 log.go:181] (0xc000142370) (0xc000c680a0) Stream 
removed, broadcasting: 1\nI0915 12:08:14.953757 3365 log.go:181] (0xc000142370) Go away received\nI0915 12:08:14.954062 3365 log.go:181] (0xc000142370) (0xc000c680a0) Stream removed, broadcasting: 1\nI0915 12:08:14.954078 3365 log.go:181] (0xc000142370) (0xc000d101e0) Stream removed, broadcasting: 3\nI0915 12:08:14.954086 3365 log.go:181] (0xc000142370) (0xc000d10280) Stream removed, broadcasting: 5\n" Sep 15 12:08:14.958: INFO: stdout: "iptables" Sep 15 12:08:14.958: INFO: proxyMode: iptables Sep 15 12:08:14.963: INFO: Waiting for pod kube-proxy-mode-detector to disappear Sep 15 12:08:14.982: INFO: Pod kube-proxy-mode-detector still exists Sep 15 12:08:16.982: INFO: Waiting for pod kube-proxy-mode-detector to disappear Sep 15 12:08:16.988: INFO: Pod kube-proxy-mode-detector still exists Sep 15 12:08:18.982: INFO: Waiting for pod kube-proxy-mode-detector to disappear Sep 15 12:08:18.986: INFO: Pod kube-proxy-mode-detector still exists Sep 15 12:08:20.982: INFO: Waiting for pod kube-proxy-mode-detector to disappear Sep 15 12:08:20.987: INFO: Pod kube-proxy-mode-detector still exists Sep 15 12:08:22.982: INFO: Waiting for pod kube-proxy-mode-detector to disappear Sep 15 12:08:22.986: INFO: Pod kube-proxy-mode-detector still exists Sep 15 12:08:24.982: INFO: Waiting for pod kube-proxy-mode-detector to disappear Sep 15 12:08:24.986: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-nodeport-timeout in namespace services-2007 STEP: creating replication controller affinity-nodeport-timeout in namespace services-2007 I0915 12:08:25.063561 7 runners.go:190] Created replication controller with name: affinity-nodeport-timeout, namespace: services-2007, replica count: 3 I0915 12:08:28.114156 7 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0915 12:08:31.114353 7 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 1 
running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0915 12:08:34.115238 7 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Sep 15 12:08:34.138: INFO: Creating new exec pod Sep 15 12:08:39.186: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=services-2007 execpod-affinityt8tmh -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-timeout 80' Sep 15 12:08:39.392: INFO: stderr: "I0915 12:08:39.310387 3383 log.go:181] (0xc000205b80) (0xc000142640) Create stream\nI0915 12:08:39.310446 3383 log.go:181] (0xc000205b80) (0xc000142640) Stream added, broadcasting: 1\nI0915 12:08:39.316208 3383 log.go:181] (0xc000205b80) Reply frame received for 1\nI0915 12:08:39.316243 3383 log.go:181] (0xc000205b80) (0xc000744000) Create stream\nI0915 12:08:39.316253 3383 log.go:181] (0xc000205b80) (0xc000744000) Stream added, broadcasting: 3\nI0915 12:08:39.317037 3383 log.go:181] (0xc000205b80) Reply frame received for 3\nI0915 12:08:39.317079 3383 log.go:181] (0xc000205b80) (0xc000916140) Create stream\nI0915 12:08:39.317095 3383 log.go:181] (0xc000205b80) (0xc000916140) Stream added, broadcasting: 5\nI0915 12:08:39.317778 3383 log.go:181] (0xc000205b80) Reply frame received for 5\nI0915 12:08:39.384080 3383 log.go:181] (0xc000205b80) Data frame received for 5\nI0915 12:08:39.384113 3383 log.go:181] (0xc000916140) (5) Data frame handling\nI0915 12:08:39.384202 3383 log.go:181] (0xc000916140) (5) Data frame sent\nI0915 12:08:39.384226 3383 log.go:181] (0xc000205b80) Data frame received for 5\nI0915 12:08:39.384238 3383 log.go:181] (0xc000916140) (5) Data frame handling\n+ nc -zv -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\nI0915 12:08:39.384267 3383 log.go:181] (0xc000916140) (5) Data frame sent\nI0915 
12:08:39.384570 3383 log.go:181] (0xc000205b80) Data frame received for 3\nI0915 12:08:39.384585 3383 log.go:181] (0xc000744000) (3) Data frame handling\nI0915 12:08:39.384702 3383 log.go:181] (0xc000205b80) Data frame received for 5\nI0915 12:08:39.384728 3383 log.go:181] (0xc000916140) (5) Data frame handling\nI0915 12:08:39.387146 3383 log.go:181] (0xc000205b80) Data frame received for 1\nI0915 12:08:39.387183 3383 log.go:181] (0xc000142640) (1) Data frame handling\nI0915 12:08:39.387216 3383 log.go:181] (0xc000142640) (1) Data frame sent\nI0915 12:08:39.387265 3383 log.go:181] (0xc000205b80) (0xc000142640) Stream removed, broadcasting: 1\nI0915 12:08:39.387307 3383 log.go:181] (0xc000205b80) Go away received\nI0915 12:08:39.387685 3383 log.go:181] (0xc000205b80) (0xc000142640) Stream removed, broadcasting: 1\nI0915 12:08:39.387710 3383 log.go:181] (0xc000205b80) (0xc000744000) Stream removed, broadcasting: 3\nI0915 12:08:39.387730 3383 log.go:181] (0xc000205b80) (0xc000916140) Stream removed, broadcasting: 5\n" Sep 15 12:08:39.392: INFO: stdout: "" Sep 15 12:08:39.393: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=services-2007 execpod-affinityt8tmh -- /bin/sh -x -c nc -zv -t -w 2 10.100.122.75 80' Sep 15 12:08:39.597: INFO: stderr: "I0915 12:08:39.523696 3401 log.go:181] (0xc000a60fd0) (0xc0000ad220) Create stream\nI0915 12:08:39.523745 3401 log.go:181] (0xc000a60fd0) (0xc0000ad220) Stream added, broadcasting: 1\nI0915 12:08:39.529149 3401 log.go:181] (0xc000a60fd0) Reply frame received for 1\nI0915 12:08:39.529220 3401 log.go:181] (0xc000a60fd0) (0xc0000ac1e0) Create stream\nI0915 12:08:39.529253 3401 log.go:181] (0xc000a60fd0) (0xc0000ac1e0) Stream added, broadcasting: 3\nI0915 12:08:39.530326 3401 log.go:181] (0xc000a60fd0) Reply frame received for 3\nI0915 12:08:39.530362 3401 log.go:181] (0xc000a60fd0) (0xc0000ad9a0) Create stream\nI0915 12:08:39.530372 3401 log.go:181] 
(0xc000a60fd0) (0xc0000ad9a0) Stream added, broadcasting: 5\nI0915 12:08:39.531225 3401 log.go:181] (0xc000a60fd0) Reply frame received for 5\nI0915 12:08:39.591621 3401 log.go:181] (0xc000a60fd0) Data frame received for 5\nI0915 12:08:39.591644 3401 log.go:181] (0xc0000ad9a0) (5) Data frame handling\nI0915 12:08:39.591658 3401 log.go:181] (0xc0000ad9a0) (5) Data frame sent\nI0915 12:08:39.591665 3401 log.go:181] (0xc000a60fd0) Data frame received for 5\nI0915 12:08:39.591671 3401 log.go:181] (0xc0000ad9a0) (5) Data frame handling\n+ nc -zv -t -w 2 10.100.122.75 80\nConnection to 10.100.122.75 80 port [tcp/http] succeeded!\nI0915 12:08:39.591904 3401 log.go:181] (0xc000a60fd0) Data frame received for 3\nI0915 12:08:39.591936 3401 log.go:181] (0xc0000ac1e0) (3) Data frame handling\nI0915 12:08:39.593439 3401 log.go:181] (0xc000a60fd0) Data frame received for 1\nI0915 12:08:39.593460 3401 log.go:181] (0xc0000ad220) (1) Data frame handling\nI0915 12:08:39.593473 3401 log.go:181] (0xc0000ad220) (1) Data frame sent\nI0915 12:08:39.593485 3401 log.go:181] (0xc000a60fd0) (0xc0000ad220) Stream removed, broadcasting: 1\nI0915 12:08:39.593519 3401 log.go:181] (0xc000a60fd0) Go away received\nI0915 12:08:39.593797 3401 log.go:181] (0xc000a60fd0) (0xc0000ad220) Stream removed, broadcasting: 1\nI0915 12:08:39.593811 3401 log.go:181] (0xc000a60fd0) (0xc0000ac1e0) Stream removed, broadcasting: 3\nI0915 12:08:39.593819 3401 log.go:181] (0xc000a60fd0) (0xc0000ad9a0) Stream removed, broadcasting: 5\n" Sep 15 12:08:39.597: INFO: stdout: "" Sep 15 12:08:39.597: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=services-2007 execpod-affinityt8tmh -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.11 31864' Sep 15 12:08:39.824: INFO: stderr: "I0915 12:08:39.735960 3420 log.go:181] (0xc000d82c60) (0xc000a7a3c0) Create stream\nI0915 12:08:39.736007 3420 log.go:181] (0xc000d82c60) (0xc000a7a3c0) Stream added, broadcasting: 
1\nI0915 12:08:39.739375 3420 log.go:181] (0xc000d82c60) Reply frame received for 1\nI0915 12:08:39.739434 3420 log.go:181] (0xc000d82c60) (0xc00063a000) Create stream\nI0915 12:08:39.739486 3420 log.go:181] (0xc000d82c60) (0xc00063a000) Stream added, broadcasting: 3\nI0915 12:08:39.740667 3420 log.go:181] (0xc000d82c60) Reply frame received for 3\nI0915 12:08:39.740703 3420 log.go:181] (0xc000d82c60) (0xc00073c280) Create stream\nI0915 12:08:39.740714 3420 log.go:181] (0xc000d82c60) (0xc00073c280) Stream added, broadcasting: 5\nI0915 12:08:39.741932 3420 log.go:181] (0xc000d82c60) Reply frame received for 5\nI0915 12:08:39.805741 3420 log.go:181] (0xc000d82c60) Data frame received for 3\nI0915 12:08:39.805768 3420 log.go:181] (0xc00063a000) (3) Data frame handling\nI0915 12:08:39.805814 3420 log.go:181] (0xc000d82c60) Data frame received for 5\nI0915 12:08:39.805847 3420 log.go:181] (0xc00073c280) (5) Data frame handling\nI0915 12:08:39.805867 3420 log.go:181] (0xc00073c280) (5) Data frame sent\nI0915 12:08:39.805882 3420 log.go:181] (0xc000d82c60) Data frame received for 5\nI0915 12:08:39.805902 3420 log.go:181] (0xc00073c280) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.11 31864\nConnection to 172.18.0.11 31864 port [tcp/31864] succeeded!\nI0915 12:08:39.807277 3420 log.go:181] (0xc000d82c60) Data frame received for 1\nI0915 12:08:39.807306 3420 log.go:181] (0xc000a7a3c0) (1) Data frame handling\nI0915 12:08:39.807320 3420 log.go:181] (0xc000a7a3c0) (1) Data frame sent\nI0915 12:08:39.807333 3420 log.go:181] (0xc000d82c60) (0xc000a7a3c0) Stream removed, broadcasting: 1\nI0915 12:08:39.807354 3420 log.go:181] (0xc000d82c60) Go away received\nI0915 12:08:39.807827 3420 log.go:181] (0xc000d82c60) (0xc000a7a3c0) Stream removed, broadcasting: 1\nI0915 12:08:39.807851 3420 log.go:181] (0xc000d82c60) (0xc00063a000) Stream removed, broadcasting: 3\nI0915 12:08:39.807864 3420 log.go:181] (0xc000d82c60) (0xc00073c280) Stream removed, broadcasting: 5\n" Sep 15 
12:08:39.824: INFO: stdout: "" Sep 15 12:08:39.824: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=services-2007 execpod-affinityt8tmh -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.12 31864' Sep 15 12:08:40.011: INFO: stderr: "I0915 12:08:39.943824 3438 log.go:181] (0xc000d0f600) (0xc000832aa0) Create stream\nI0915 12:08:39.943882 3438 log.go:181] (0xc000d0f600) (0xc000832aa0) Stream added, broadcasting: 1\nI0915 12:08:39.951255 3438 log.go:181] (0xc000d0f600) Reply frame received for 1\nI0915 12:08:39.951314 3438 log.go:181] (0xc000d0f600) (0xc000be20a0) Create stream\nI0915 12:08:39.951336 3438 log.go:181] (0xc000d0f600) (0xc000be20a0) Stream added, broadcasting: 3\nI0915 12:08:39.952394 3438 log.go:181] (0xc000d0f600) Reply frame received for 3\nI0915 12:08:39.952420 3438 log.go:181] (0xc000d0f600) (0xc000832000) Create stream\nI0915 12:08:39.952429 3438 log.go:181] (0xc000d0f600) (0xc000832000) Stream added, broadcasting: 5\nI0915 12:08:39.953324 3438 log.go:181] (0xc000d0f600) Reply frame received for 5\nI0915 12:08:40.004057 3438 log.go:181] (0xc000d0f600) Data frame received for 5\nI0915 12:08:40.004095 3438 log.go:181] (0xc000832000) (5) Data frame handling\nI0915 12:08:40.004109 3438 log.go:181] (0xc000832000) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.12 31864\nConnection to 172.18.0.12 31864 port [tcp/31864] succeeded!\nI0915 12:08:40.004125 3438 log.go:181] (0xc000d0f600) Data frame received for 3\nI0915 12:08:40.004282 3438 log.go:181] (0xc000be20a0) (3) Data frame handling\nI0915 12:08:40.004358 3438 log.go:181] (0xc000d0f600) Data frame received for 5\nI0915 12:08:40.004399 3438 log.go:181] (0xc000832000) (5) Data frame handling\nI0915 12:08:40.006275 3438 log.go:181] (0xc000d0f600) Data frame received for 1\nI0915 12:08:40.006312 3438 log.go:181] (0xc000832aa0) (1) Data frame handling\nI0915 12:08:40.006360 3438 log.go:181] (0xc000832aa0) (1) Data frame sent\nI0915 
12:08:40.006392 3438 log.go:181] (0xc000d0f600) (0xc000832aa0) Stream removed, broadcasting: 1\nI0915 12:08:40.006428 3438 log.go:181] (0xc000d0f600) Go away received\nI0915 12:08:40.006809 3438 log.go:181] (0xc000d0f600) (0xc000832aa0) Stream removed, broadcasting: 1\nI0915 12:08:40.006843 3438 log.go:181] (0xc000d0f600) (0xc000be20a0) Stream removed, broadcasting: 3\nI0915 12:08:40.006860 3438 log.go:181] (0xc000d0f600) (0xc000832000) Stream removed, broadcasting: 5\n" Sep 15 12:08:40.011: INFO: stdout: "" Sep 15 12:08:40.011: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=services-2007 execpod-affinityt8tmh -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.11:31864/ ; done' Sep 15 12:08:40.334: INFO: stderr: "I0915 12:08:40.158021 3456 log.go:181] (0xc000f2cf20) (0xc00031d5e0) Create stream\nI0915 12:08:40.158091 3456 log.go:181] (0xc000f2cf20) (0xc00031d5e0) Stream added, broadcasting: 1\nI0915 12:08:40.163149 3456 log.go:181] (0xc000f2cf20) Reply frame received for 1\nI0915 12:08:40.163196 3456 log.go:181] (0xc000f2cf20) (0xc00031c140) Create stream\nI0915 12:08:40.163214 3456 log.go:181] (0xc000f2cf20) (0xc00031c140) Stream added, broadcasting: 3\nI0915 12:08:40.164217 3456 log.go:181] (0xc000f2cf20) Reply frame received for 3\nI0915 12:08:40.164267 3456 log.go:181] (0xc000f2cf20) (0xc000970320) Create stream\nI0915 12:08:40.164280 3456 log.go:181] (0xc000f2cf20) (0xc000970320) Stream added, broadcasting: 5\nI0915 12:08:40.165160 3456 log.go:181] (0xc000f2cf20) Reply frame received for 5\nI0915 12:08:40.236518 3456 log.go:181] (0xc000f2cf20) Data frame received for 5\nI0915 12:08:40.236578 3456 log.go:181] (0xc000970320) (5) Data frame handling\nI0915 12:08:40.236596 3456 log.go:181] (0xc000970320) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31864/\nI0915 12:08:40.236623 3456 log.go:181] 
(0xc000f2cf20) Data frame received for 3\nI0915 12:08:40.236636 3456 log.go:181] (0xc00031c140) (3) Data frame handling\nI0915 12:08:40.236652 3456 log.go:181] (0xc00031c140) (3) Data frame sent\nI0915 12:08:40.241381 3456 log.go:181] (0xc000f2cf20) Data frame received for 3\nI0915 12:08:40.241408 3456 log.go:181] (0xc00031c140) (3) Data frame handling\nI0915 12:08:40.241428 3456 log.go:181] (0xc00031c140) (3) Data frame sent\nI0915 12:08:40.242768 3456 log.go:181] (0xc000f2cf20) Data frame received for 5\nI0915 12:08:40.242797 3456 log.go:181] (0xc000970320) (5) Data frame handling\nI0915 12:08:40.242812 3456 log.go:181] (0xc000970320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31864/\nI0915 12:08:40.242840 3456 log.go:181] (0xc000f2cf20) Data frame received for 3\nI0915 12:08:40.242853 3456 log.go:181] (0xc00031c140) (3) Data frame handling\nI0915 12:08:40.242865 3456 log.go:181] (0xc00031c140) (3) Data frame sent\nI0915 12:08:40.247427 3456 log.go:181] (0xc000f2cf20) Data frame received for 3\nI0915 12:08:40.247458 3456 log.go:181] (0xc00031c140) (3) Data frame handling\nI0915 12:08:40.247484 3456 log.go:181] (0xc00031c140) (3) Data frame sent\nI0915 12:08:40.248497 3456 log.go:181] (0xc000f2cf20) Data frame received for 3\nI0915 12:08:40.248513 3456 log.go:181] (0xc00031c140) (3) Data frame handling\nI0915 12:08:40.248522 3456 log.go:181] (0xc00031c140) (3) Data frame sent\nI0915 12:08:40.248542 3456 log.go:181] (0xc000f2cf20) Data frame received for 5\nI0915 12:08:40.248566 3456 log.go:181] (0xc000970320) (5) Data frame handling\nI0915 12:08:40.248582 3456 log.go:181] (0xc000970320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31864/\nI0915 12:08:40.253090 3456 log.go:181] (0xc000f2cf20) Data frame received for 3\nI0915 12:08:40.253103 3456 log.go:181] (0xc00031c140) (3) Data frame handling\nI0915 12:08:40.253119 3456 log.go:181] (0xc00031c140) (3) Data frame sent\nI0915 12:08:40.253783 
3456 log.go:181] (0xc000f2cf20) Data frame received for 5\nI0915 12:08:40.253795 3456 log.go:181] (0xc000970320) (5) Data frame handling\nI0915 12:08:40.253802 3456 log.go:181] (0xc000970320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31864/\nI0915 12:08:40.253822 3456 log.go:181] (0xc000f2cf20) Data frame received for 3\nI0915 12:08:40.253844 3456 log.go:181] (0xc00031c140) (3) Data frame handling\nI0915 12:08:40.253864 3456 log.go:181] (0xc00031c140) (3) Data frame sent\nI0915 12:08:40.258696 3456 log.go:181] (0xc000f2cf20) Data frame received for 3\nI0915 12:08:40.258719 3456 log.go:181] (0xc00031c140) (3) Data frame handling\nI0915 12:08:40.258737 3456 log.go:181] (0xc00031c140) (3) Data frame sent\nI0915 12:08:40.259367 3456 log.go:181] (0xc000f2cf20) Data frame received for 5\nI0915 12:08:40.259393 3456 log.go:181] (0xc000970320) (5) Data frame handling\nI0915 12:08:40.259403 3456 log.go:181] (0xc000970320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31864/\nI0915 12:08:40.259415 3456 log.go:181] (0xc000f2cf20) Data frame received for 3\nI0915 12:08:40.259438 3456 log.go:181] (0xc00031c140) (3) Data frame handling\nI0915 12:08:40.259453 3456 log.go:181] (0xc00031c140) (3) Data frame sent\nI0915 12:08:40.266401 3456 log.go:181] (0xc000f2cf20) Data frame received for 3\nI0915 12:08:40.266423 3456 log.go:181] (0xc00031c140) (3) Data frame handling\nI0915 12:08:40.266439 3456 log.go:181] (0xc00031c140) (3) Data frame sent\nI0915 12:08:40.267609 3456 log.go:181] (0xc000f2cf20) Data frame received for 3\nI0915 12:08:40.267648 3456 log.go:181] (0xc00031c140) (3) Data frame handling\nI0915 12:08:40.267677 3456 log.go:181] (0xc00031c140) (3) Data frame sent\nI0915 12:08:40.267700 3456 log.go:181] (0xc000f2cf20) Data frame received for 5\nI0915 12:08:40.267713 3456 log.go:181] (0xc000970320) (5) Data frame handling\nI0915 12:08:40.267737 3456 log.go:181] (0xc000970320) (5) Data frame sent\n+ 
echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31864/\nI0915 12:08:40.272067 3456 log.go:181] (0xc000f2cf20) Data frame received for 3\nI0915 12:08:40.272086 3456 log.go:181] (0xc00031c140) (3) Data frame handling\nI0915 12:08:40.272097 3456 log.go:181] (0xc00031c140) (3) Data frame sent\nI0915 12:08:40.272726 3456 log.go:181] (0xc000f2cf20) Data frame received for 3\nI0915 12:08:40.272746 3456 log.go:181] (0xc000f2cf20) Data frame received for 5\nI0915 12:08:40.272767 3456 log.go:181] (0xc000970320) (5) Data frame handling\nI0915 12:08:40.272778 3456 log.go:181] (0xc000970320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31864/\nI0915 12:08:40.272786 3456 log.go:181] (0xc00031c140) (3) Data frame handling\nI0915 12:08:40.272794 3456 log.go:181] (0xc00031c140) (3) Data frame sent\nI0915 12:08:40.280760 3456 log.go:181] (0xc000f2cf20) Data frame received for 3\nI0915 12:08:40.280791 3456 log.go:181] (0xc00031c140) (3) Data frame handling\nI0915 12:08:40.280812 3456 log.go:181] (0xc00031c140) (3) Data frame sent\nI0915 12:08:40.284598 3456 log.go:181] (0xc000f2cf20) Data frame received for 3\nI0915 12:08:40.284629 3456 log.go:181] (0xc00031c140) (3) Data frame handling\nI0915 12:08:40.284641 3456 log.go:181] (0xc00031c140) (3) Data frame sent\nI0915 12:08:40.284660 3456 log.go:181] (0xc000f2cf20) Data frame received for 5\nI0915 12:08:40.284669 3456 log.go:181] (0xc000970320) (5) Data frame handling\nI0915 12:08:40.284678 3456 log.go:181] (0xc000970320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31864/\nI0915 12:08:40.287158 3456 log.go:181] (0xc000f2cf20) Data frame received for 3\nI0915 12:08:40.287182 3456 log.go:181] (0xc00031c140) (3) Data frame handling\nI0915 12:08:40.287197 3456 log.go:181] (0xc00031c140) (3) Data frame sent\nI0915 12:08:40.287487 3456 log.go:181] (0xc000f2cf20) Data frame received for 3\nI0915 12:08:40.287512 3456 log.go:181] (0xc00031c140) (3) Data frame 
handling\nI0915 12:08:40.287524 3456 log.go:181] (0xc00031c140) (3) Data frame sent\nI0915 12:08:40.287538 3456 log.go:181] (0xc000f2cf20) Data frame received for 5\nI0915 12:08:40.287547 3456 log.go:181] (0xc000970320) (5) Data frame handling\nI0915 12:08:40.287555 3456 log.go:181] (0xc000970320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31864/\nI0915 12:08:40.293066 3456 log.go:181] (0xc000f2cf20) Data frame received for 3\nI0915 12:08:40.293086 3456 log.go:181] (0xc00031c140) (3) Data frame handling\nI0915 12:08:40.293101 3456 log.go:181] (0xc00031c140) (3) Data frame sent\nI0915 12:08:40.293718 3456 log.go:181] (0xc000f2cf20) Data frame received for 3\nI0915 12:08:40.293744 3456 log.go:181] (0xc00031c140) (3) Data frame handling\nI0915 12:08:40.293760 3456 log.go:181] (0xc00031c140) (3) Data frame sent\nI0915 12:08:40.293778 3456 log.go:181] (0xc000f2cf20) Data frame received for 5\nI0915 12:08:40.293793 3456 log.go:181] (0xc000970320) (5) Data frame handling\nI0915 12:08:40.293803 3456 log.go:181] (0xc000970320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31864/\nI0915 12:08:40.296823 3456 log.go:181] (0xc000f2cf20) Data frame received for 3\nI0915 12:08:40.296849 3456 log.go:181] (0xc00031c140) (3) Data frame handling\nI0915 12:08:40.296870 3456 log.go:181] (0xc00031c140) (3) Data frame sent\nI0915 12:08:40.297449 3456 log.go:181] (0xc000f2cf20) Data frame received for 3\nI0915 12:08:40.297471 3456 log.go:181] (0xc00031c140) (3) Data frame handling\nI0915 12:08:40.297481 3456 log.go:181] (0xc00031c140) (3) Data frame sent\nI0915 12:08:40.297494 3456 log.go:181] (0xc000f2cf20) Data frame received for 5\nI0915 12:08:40.297501 3456 log.go:181] (0xc000970320) (5) Data frame handling\nI0915 12:08:40.297508 3456 log.go:181] (0xc000970320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31864/\nI0915 12:08:40.301772 3456 log.go:181] (0xc000f2cf20) Data frame 
received for 3\nI0915 12:08:40.301792 3456 log.go:181] (0xc00031c140) (3) Data frame handling\nI0915 12:08:40.301811 3456 log.go:181] (0xc00031c140) (3) Data frame sent\nI0915 12:08:40.302176 3456 log.go:181] (0xc000f2cf20) Data frame received for 5\nI0915 12:08:40.302190 3456 log.go:181] (0xc000970320) (5) Data frame handling\nI0915 12:08:40.302201 3456 log.go:181] (0xc000970320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31864/\nI0915 12:08:40.302229 3456 log.go:181] (0xc000f2cf20) Data frame received for 3\nI0915 12:08:40.302259 3456 log.go:181] (0xc00031c140) (3) Data frame handling\nI0915 12:08:40.302281 3456 log.go:181] (0xc00031c140) (3) Data frame sent\nI0915 12:08:40.309014 3456 log.go:181] (0xc000f2cf20) Data frame received for 3\nI0915 12:08:40.309030 3456 log.go:181] (0xc00031c140) (3) Data frame handling\nI0915 12:08:40.309037 3456 log.go:181] (0xc00031c140) (3) Data frame sent\nI0915 12:08:40.309894 3456 log.go:181] (0xc000f2cf20) Data frame received for 5\nI0915 12:08:40.309924 3456 log.go:181] (0xc000970320) (5) Data frame handling\nI0915 12:08:40.309937 3456 log.go:181] (0xc000970320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31864/\nI0915 12:08:40.309956 3456 log.go:181] (0xc000f2cf20) Data frame received for 3\nI0915 12:08:40.309971 3456 log.go:181] (0xc00031c140) (3) Data frame handling\nI0915 12:08:40.309981 3456 log.go:181] (0xc00031c140) (3) Data frame sent\nI0915 12:08:40.313866 3456 log.go:181] (0xc000f2cf20) Data frame received for 3\nI0915 12:08:40.313885 3456 log.go:181] (0xc00031c140) (3) Data frame handling\nI0915 12:08:40.313902 3456 log.go:181] (0xc00031c140) (3) Data frame sent\nI0915 12:08:40.314451 3456 log.go:181] (0xc000f2cf20) Data frame received for 3\nI0915 12:08:40.314474 3456 log.go:181] (0xc00031c140) (3) Data frame handling\nI0915 12:08:40.314483 3456 log.go:181] (0xc00031c140) (3) Data frame sent\nI0915 12:08:40.314497 3456 log.go:181] 
(0xc000f2cf20) Data frame received for 5\nI0915 12:08:40.314503 3456 log.go:181] (0xc000970320) (5) Data frame handling\nI0915 12:08:40.314510 3456 log.go:181] (0xc000970320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31864/\nI0915 12:08:40.318421 3456 log.go:181] (0xc000f2cf20) Data frame received for 3\nI0915 12:08:40.318444 3456 log.go:181] (0xc00031c140) (3) Data frame handling\nI0915 12:08:40.318469 3456 log.go:181] (0xc00031c140) (3) Data frame sent\nI0915 12:08:40.319121 3456 log.go:181] (0xc000f2cf20) Data frame received for 3\nI0915 12:08:40.319141 3456 log.go:181] (0xc00031c140) (3) Data frame handling\nI0915 12:08:40.319153 3456 log.go:181] (0xc00031c140) (3) Data frame sent\nI0915 12:08:40.319173 3456 log.go:181] (0xc000f2cf20) Data frame received for 5\nI0915 12:08:40.319182 3456 log.go:181] (0xc000970320) (5) Data frame handling\nI0915 12:08:40.319193 3456 log.go:181] (0xc000970320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31864/\nI0915 12:08:40.323436 3456 log.go:181] (0xc000f2cf20) Data frame received for 3\nI0915 12:08:40.323456 3456 log.go:181] (0xc00031c140) (3) Data frame handling\nI0915 12:08:40.323473 3456 log.go:181] (0xc00031c140) (3) Data frame sent\nI0915 12:08:40.323923 3456 log.go:181] (0xc000f2cf20) Data frame received for 3\nI0915 12:08:40.323949 3456 log.go:181] (0xc000f2cf20) Data frame received for 5\nI0915 12:08:40.324013 3456 log.go:181] (0xc000970320) (5) Data frame handling\nI0915 12:08:40.324029 3456 log.go:181] (0xc000970320) (5) Data frame sent\nI0915 12:08:40.324038 3456 log.go:181] (0xc000f2cf20) Data frame received for 5\nI0915 12:08:40.324047 3456 log.go:181] (0xc000970320) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31864/\nI0915 12:08:40.324070 3456 log.go:181] (0xc000970320) (5) Data frame sent\nI0915 12:08:40.324082 3456 log.go:181] (0xc00031c140) (3) Data frame handling\nI0915 12:08:40.324093 3456 
log.go:181] (0xc00031c140) (3) Data frame sent\nI0915 12:08:40.327582 3456 log.go:181] (0xc000f2cf20) Data frame received for 3\nI0915 12:08:40.327595 3456 log.go:181] (0xc00031c140) (3) Data frame handling\nI0915 12:08:40.327602 3456 log.go:181] (0xc00031c140) (3) Data frame sent\nI0915 12:08:40.328371 3456 log.go:181] (0xc000f2cf20) Data frame received for 3\nI0915 12:08:40.328385 3456 log.go:181] (0xc00031c140) (3) Data frame handling\nI0915 12:08:40.328585 3456 log.go:181] (0xc000f2cf20) Data frame received for 5\nI0915 12:08:40.328618 3456 log.go:181] (0xc000970320) (5) Data frame handling\nI0915 12:08:40.330058 3456 log.go:181] (0xc000f2cf20) Data frame received for 1\nI0915 12:08:40.330090 3456 log.go:181] (0xc00031d5e0) (1) Data frame handling\nI0915 12:08:40.330118 3456 log.go:181] (0xc00031d5e0) (1) Data frame sent\nI0915 12:08:40.330145 3456 log.go:181] (0xc000f2cf20) (0xc00031d5e0) Stream removed, broadcasting: 1\nI0915 12:08:40.330190 3456 log.go:181] (0xc000f2cf20) Go away received\nI0915 12:08:40.330440 3456 log.go:181] (0xc000f2cf20) (0xc00031d5e0) Stream removed, broadcasting: 1\nI0915 12:08:40.330453 3456 log.go:181] (0xc000f2cf20) (0xc00031c140) Stream removed, broadcasting: 3\nI0915 12:08:40.330459 3456 log.go:181] (0xc000f2cf20) (0xc000970320) Stream removed, broadcasting: 5\n" Sep 15 12:08:40.334: INFO: stdout: "\naffinity-nodeport-timeout-g7xrm\naffinity-nodeport-timeout-g7xrm\naffinity-nodeport-timeout-g7xrm\naffinity-nodeport-timeout-g7xrm\naffinity-nodeport-timeout-g7xrm\naffinity-nodeport-timeout-g7xrm\naffinity-nodeport-timeout-g7xrm\naffinity-nodeport-timeout-g7xrm\naffinity-nodeport-timeout-g7xrm\naffinity-nodeport-timeout-g7xrm\naffinity-nodeport-timeout-g7xrm\naffinity-nodeport-timeout-g7xrm\naffinity-nodeport-timeout-g7xrm\naffinity-nodeport-timeout-g7xrm\naffinity-nodeport-timeout-g7xrm\naffinity-nodeport-timeout-g7xrm" Sep 15 12:08:40.334: INFO: Received response from host: affinity-nodeport-timeout-g7xrm Sep 15 12:08:40.334: 
INFO: Received response from host: affinity-nodeport-timeout-g7xrm Sep 15 12:08:40.334: INFO: Received response from host: affinity-nodeport-timeout-g7xrm Sep 15 12:08:40.334: INFO: Received response from host: affinity-nodeport-timeout-g7xrm Sep 15 12:08:40.334: INFO: Received response from host: affinity-nodeport-timeout-g7xrm Sep 15 12:08:40.334: INFO: Received response from host: affinity-nodeport-timeout-g7xrm Sep 15 12:08:40.334: INFO: Received response from host: affinity-nodeport-timeout-g7xrm Sep 15 12:08:40.334: INFO: Received response from host: affinity-nodeport-timeout-g7xrm Sep 15 12:08:40.334: INFO: Received response from host: affinity-nodeport-timeout-g7xrm Sep 15 12:08:40.334: INFO: Received response from host: affinity-nodeport-timeout-g7xrm Sep 15 12:08:40.334: INFO: Received response from host: affinity-nodeport-timeout-g7xrm Sep 15 12:08:40.334: INFO: Received response from host: affinity-nodeport-timeout-g7xrm Sep 15 12:08:40.334: INFO: Received response from host: affinity-nodeport-timeout-g7xrm Sep 15 12:08:40.334: INFO: Received response from host: affinity-nodeport-timeout-g7xrm Sep 15 12:08:40.334: INFO: Received response from host: affinity-nodeport-timeout-g7xrm Sep 15 12:08:40.334: INFO: Received response from host: affinity-nodeport-timeout-g7xrm Sep 15 12:08:40.334: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=services-2007 execpod-affinityt8tmh -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.18.0.11:31864/' Sep 15 12:08:40.555: INFO: stderr: "I0915 12:08:40.474045 3474 log.go:181] (0xc000143970) (0xc000d90a00) Create stream\nI0915 12:08:40.474461 3474 log.go:181] (0xc000143970) (0xc000d90a00) Stream added, broadcasting: 1\nI0915 12:08:40.479218 3474 log.go:181] (0xc000143970) Reply frame received for 1\nI0915 12:08:40.479322 3474 log.go:181] (0xc000143970) (0xc000d90aa0) Create stream\nI0915 12:08:40.479339 3474 log.go:181] (0xc000143970) 
(0xc000d90aa0) Stream added, broadcasting: 3\nI0915 12:08:40.480473 3474 log.go:181] (0xc000143970) Reply frame received for 3\nI0915 12:08:40.480497 3474 log.go:181] (0xc000143970) (0xc000d90b40) Create stream\nI0915 12:08:40.480507 3474 log.go:181] (0xc000143970) (0xc000d90b40) Stream added, broadcasting: 5\nI0915 12:08:40.481510 3474 log.go:181] (0xc000143970) Reply frame received for 5\nI0915 12:08:40.546408 3474 log.go:181] (0xc000143970) Data frame received for 5\nI0915 12:08:40.546442 3474 log.go:181] (0xc000d90b40) (5) Data frame handling\nI0915 12:08:40.546462 3474 log.go:181] (0xc000d90b40) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31864/\nI0915 12:08:40.548744 3474 log.go:181] (0xc000143970) Data frame received for 3\nI0915 12:08:40.548769 3474 log.go:181] (0xc000d90aa0) (3) Data frame handling\nI0915 12:08:40.548796 3474 log.go:181] (0xc000d90aa0) (3) Data frame sent\nI0915 12:08:40.549095 3474 log.go:181] (0xc000143970) Data frame received for 3\nI0915 12:08:40.549122 3474 log.go:181] (0xc000d90aa0) (3) Data frame handling\nI0915 12:08:40.549294 3474 log.go:181] (0xc000143970) Data frame received for 5\nI0915 12:08:40.549305 3474 log.go:181] (0xc000d90b40) (5) Data frame handling\nI0915 12:08:40.550675 3474 log.go:181] (0xc000143970) Data frame received for 1\nI0915 12:08:40.550689 3474 log.go:181] (0xc000d90a00) (1) Data frame handling\nI0915 12:08:40.550701 3474 log.go:181] (0xc000d90a00) (1) Data frame sent\nI0915 12:08:40.550814 3474 log.go:181] (0xc000143970) (0xc000d90a00) Stream removed, broadcasting: 1\nI0915 12:08:40.550876 3474 log.go:181] (0xc000143970) Go away received\nI0915 12:08:40.551261 3474 log.go:181] (0xc000143970) (0xc000d90a00) Stream removed, broadcasting: 1\nI0915 12:08:40.551280 3474 log.go:181] (0xc000143970) (0xc000d90aa0) Stream removed, broadcasting: 3\nI0915 12:08:40.551290 3474 log.go:181] (0xc000143970) (0xc000d90b40) Stream removed, broadcasting: 5\n" Sep 15 12:08:40.555: INFO: stdout: 
"affinity-nodeport-timeout-g7xrm" Sep 15 12:08:55.555: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=services-2007 execpod-affinityt8tmh -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.18.0.11:31864/' Sep 15 12:08:55.801: INFO: stderr: "I0915 12:08:55.706110 3492 log.go:181] (0xc000a471e0) (0xc00072e640) Create stream\nI0915 12:08:55.706175 3492 log.go:181] (0xc000a471e0) (0xc00072e640) Stream added, broadcasting: 1\nI0915 12:08:55.711679 3492 log.go:181] (0xc000a471e0) Reply frame received for 1\nI0915 12:08:55.711730 3492 log.go:181] (0xc000a471e0) (0xc0008a6000) Create stream\nI0915 12:08:55.711745 3492 log.go:181] (0xc000a471e0) (0xc0008a6000) Stream added, broadcasting: 3\nI0915 12:08:55.712926 3492 log.go:181] (0xc000a471e0) Reply frame received for 3\nI0915 12:08:55.712966 3492 log.go:181] (0xc000a471e0) (0xc00062cfa0) Create stream\nI0915 12:08:55.712979 3492 log.go:181] (0xc000a471e0) (0xc00062cfa0) Stream added, broadcasting: 5\nI0915 12:08:55.713948 3492 log.go:181] (0xc000a471e0) Reply frame received for 5\nI0915 12:08:55.788680 3492 log.go:181] (0xc000a471e0) Data frame received for 5\nI0915 12:08:55.788705 3492 log.go:181] (0xc00062cfa0) (5) Data frame handling\nI0915 12:08:55.788718 3492 log.go:181] (0xc00062cfa0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31864/\nI0915 12:08:55.794253 3492 log.go:181] (0xc000a471e0) Data frame received for 3\nI0915 12:08:55.794282 3492 log.go:181] (0xc0008a6000) (3) Data frame handling\nI0915 12:08:55.794304 3492 log.go:181] (0xc0008a6000) (3) Data frame sent\nI0915 12:08:55.794683 3492 log.go:181] (0xc000a471e0) Data frame received for 5\nI0915 12:08:55.794715 3492 log.go:181] (0xc00062cfa0) (5) Data frame handling\nI0915 12:08:55.794917 3492 log.go:181] (0xc000a471e0) Data frame received for 3\nI0915 12:08:55.794940 3492 log.go:181] (0xc0008a6000) (3) Data frame handling\nI0915 12:08:55.796582 3492 
log.go:181] (0xc000a471e0) Data frame received for 1\nI0915 12:08:55.796623 3492 log.go:181] (0xc00072e640) (1) Data frame handling\nI0915 12:08:55.796653 3492 log.go:181] (0xc00072e640) (1) Data frame sent\nI0915 12:08:55.796700 3492 log.go:181] (0xc000a471e0) (0xc00072e640) Stream removed, broadcasting: 1\nI0915 12:08:55.796747 3492 log.go:181] (0xc000a471e0) Go away received\nI0915 12:08:55.797146 3492 log.go:181] (0xc000a471e0) (0xc00072e640) Stream removed, broadcasting: 1\nI0915 12:08:55.797163 3492 log.go:181] (0xc000a471e0) (0xc0008a6000) Stream removed, broadcasting: 3\nI0915 12:08:55.797172 3492 log.go:181] (0xc000a471e0) (0xc00062cfa0) Stream removed, broadcasting: 5\n" Sep 15 12:08:55.801: INFO: stdout: "affinity-nodeport-timeout-g7xrm" Sep 15 12:09:10.801: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=services-2007 execpod-affinityt8tmh -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.18.0.11:31864/' Sep 15 12:09:11.012: INFO: stderr: "I0915 12:09:10.937731 3510 log.go:181] (0xc00003a0b0) (0xc0009885a0) Create stream\nI0915 12:09:10.937775 3510 log.go:181] (0xc00003a0b0) (0xc0009885a0) Stream added, broadcasting: 1\nI0915 12:09:10.939385 3510 log.go:181] (0xc00003a0b0) Reply frame received for 1\nI0915 12:09:10.939454 3510 log.go:181] (0xc00003a0b0) (0xc0009aea00) Create stream\nI0915 12:09:10.939467 3510 log.go:181] (0xc00003a0b0) (0xc0009aea00) Stream added, broadcasting: 3\nI0915 12:09:10.940483 3510 log.go:181] (0xc00003a0b0) Reply frame received for 3\nI0915 12:09:10.940521 3510 log.go:181] (0xc00003a0b0) (0xc000988b40) Create stream\nI0915 12:09:10.940532 3510 log.go:181] (0xc00003a0b0) (0xc000988b40) Stream added, broadcasting: 5\nI0915 12:09:10.941437 3510 log.go:181] (0xc00003a0b0) Reply frame received for 5\nI0915 12:09:11.002177 3510 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0915 12:09:11.002206 3510 log.go:181] (0xc000988b40) (5) Data frame 
handling\nI0915 12:09:11.002226 3510 log.go:181] (0xc000988b40) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31864/\nI0915 12:09:11.004661 3510 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0915 12:09:11.004686 3510 log.go:181] (0xc0009aea00) (3) Data frame handling\nI0915 12:09:11.004706 3510 log.go:181] (0xc0009aea00) (3) Data frame sent\nI0915 12:09:11.005550 3510 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0915 12:09:11.005589 3510 log.go:181] (0xc0009aea00) (3) Data frame handling\nI0915 12:09:11.005630 3510 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0915 12:09:11.005657 3510 log.go:181] (0xc000988b40) (5) Data frame handling\nI0915 12:09:11.007493 3510 log.go:181] (0xc00003a0b0) Data frame received for 1\nI0915 12:09:11.007513 3510 log.go:181] (0xc0009885a0) (1) Data frame handling\nI0915 12:09:11.007530 3510 log.go:181] (0xc0009885a0) (1) Data frame sent\nI0915 12:09:11.007553 3510 log.go:181] (0xc00003a0b0) (0xc0009885a0) Stream removed, broadcasting: 1\nI0915 12:09:11.007577 3510 log.go:181] (0xc00003a0b0) Go away received\nI0915 12:09:11.008029 3510 log.go:181] (0xc00003a0b0) (0xc0009885a0) Stream removed, broadcasting: 1\nI0915 12:09:11.008056 3510 log.go:181] (0xc00003a0b0) (0xc0009aea00) Stream removed, broadcasting: 3\nI0915 12:09:11.008077 3510 log.go:181] (0xc00003a0b0) (0xc000988b40) Stream removed, broadcasting: 5\n" Sep 15 12:09:11.013: INFO: stdout: "affinity-nodeport-timeout-4nhkt" Sep 15 12:09:11.013: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-2007, will wait for the garbage collector to delete the pods Sep 15 12:09:11.111: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 8.980467ms Sep 15 12:09:11.712: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 600.204653ms [AfterEach] [sig-network] Services 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 12:09:23.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2007" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:73.261 seconds] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":303,"completed":295,"skipped":4789,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected combined /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 12:09:23.875: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API 
[Projection][NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-projected-all-test-volume-ddcb1bee-46a5-43cd-a285-affac223fffd STEP: Creating secret with name secret-projected-all-test-volume-ded971f6-dc5b-4c3a-b97e-161674c27e35 STEP: Creating a pod to test Check all projections for projected volume plugin Sep 15 12:09:23.957: INFO: Waiting up to 5m0s for pod "projected-volume-f6ee84ae-b2c0-462c-8901-0587781a20c0" in namespace "projected-880" to be "Succeeded or Failed" Sep 15 12:09:23.965: INFO: Pod "projected-volume-f6ee84ae-b2c0-462c-8901-0587781a20c0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.436118ms Sep 15 12:09:25.969: INFO: Pod "projected-volume-f6ee84ae-b2c0-462c-8901-0587781a20c0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011786692s Sep 15 12:09:27.975: INFO: Pod "projected-volume-f6ee84ae-b2c0-462c-8901-0587781a20c0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018112652s STEP: Saw pod success Sep 15 12:09:27.975: INFO: Pod "projected-volume-f6ee84ae-b2c0-462c-8901-0587781a20c0" satisfied condition "Succeeded or Failed" Sep 15 12:09:27.978: INFO: Trying to get logs from node kali-worker pod projected-volume-f6ee84ae-b2c0-462c-8901-0587781a20c0 container projected-all-volume-test: STEP: delete the pod Sep 15 12:09:27.997: INFO: Waiting for pod projected-volume-f6ee84ae-b2c0-462c-8901-0587781a20c0 to disappear Sep 15 12:09:28.002: INFO: Pod projected-volume-f6ee84ae-b2c0-462c-8901-0587781a20c0 no longer exists [AfterEach] [sig-storage] Projected combined /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 12:09:28.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-880" for this suite. 
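The repeated "Waiting up to 5m0s for pod ... to be \"Succeeded or Failed\"" entries above (Pending at 8ms, Pending at ~2s, Succeeded at ~4s) follow a simple poll-until-terminal-phase pattern. A minimal sketch of that loop, assuming a hypothetical `get_phase` callable rather than the framework's real pod client:

```python
import time

def wait_for_pod_condition(get_phase, timeout=300.0, interval=2.0, now=time.monotonic):
    """Poll get_phase() until it returns a terminal phase or the timeout elapses.

    Mirrors the log pattern above: each attempt reports elapsed time, and the
    wait ends as soon as the pod reaches Succeeded or Failed.
    """
    start = now()
    while True:
        phase = get_phase()
        elapsed = now() - start
        print(f'Pod phase={phase!r}. Elapsed: {elapsed:.3f}s')
        if phase in ('Succeeded', 'Failed'):
            return phase
        if elapsed >= timeout:
            raise TimeoutError(f'pod still {phase!r} after {timeout}s')
        time.sleep(interval)

# Simulated pod that stays Pending for two polls, then succeeds,
# matching the three INFO lines in the log above:
phases = iter(['Pending', 'Pending', 'Succeeded'])
result = wait_for_pod_condition(lambda: next(phases), interval=0.01)
```

The real framework additionally treats `Failed` as a test failure for "Succeeded or Failed" waits; this sketch only shows the polling shape.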
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":303,"completed":296,"skipped":4846,"failed":0} S ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 12:09:28.010: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 12:09:39.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9576" for this suite. 
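The ResourceQuota life cycle exercised above — usage rises when the ReplicaSet is created, and is released after deletion — can be sketched with a toy accountant. Names and structure here are illustrative, not the quota controller's actual types:

```python
class QuotaExceeded(Exception):
    pass

class ResourceQuota:
    """Toy model of a namespace quota: tracks used vs. hard counts per resource."""

    def __init__(self, hard):
        self.hard = dict(hard)                 # e.g. {'count/replicasets.apps': 1}
        self.used = {k: 0 for k in hard}

    def charge(self, resource, n=1):
        """Admit a creation only if it fits under the hard limit."""
        if self.used[resource] + n > self.hard[resource]:
            raise QuotaExceeded(
                f'{resource}: {self.used[resource] + n} > {self.hard[resource]}')
        self.used[resource] += n

    def release(self, resource, n=1):
        """Release usage when the object is garbage-collected."""
        self.used[resource] = max(0, self.used[resource] - n)

quota = ResourceQuota({'count/replicasets.apps': 1})
quota.charge('count/replicasets.apps')    # "Creating a ReplicaSet"
quota.release('count/replicasets.apps')   # "Deleting a ReplicaSet" -> usage released
```

In the real controller, `used` is recomputed asynchronously and surfaced in the quota's status, which is why the test "ensures" status rather than asserting it immediately.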
• [SLOW TEST:11.160 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":303,"completed":297,"skipped":4847,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 12:09:39.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Sep 15 12:09:47.335: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Sep 15 12:09:47.369: INFO: Pod pod-with-prestop-http-hook still exists Sep 15 12:09:49.369: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Sep 15 12:09:49.374: INFO: Pod pod-with-prestop-http-hook still exists Sep 15 12:09:51.370: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Sep 15 12:09:51.375: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 12:09:51.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-7809" for this suite. 
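The preStop check above boils down to: delete the pod, then confirm the handler pod received an HTTP GET issued by the kubelet for the hook. A self-contained sketch of that receive-and-verify shape — the `/echo?msg=prestop` path is illustrative here, standing in for the handler pod's endpoint:

```python
import http.server
import threading
import urllib.request

received = []

class HookHandler(http.server.BaseHTTPRequestHandler):
    """Stands in for the handler pod: records every hook GET it receives."""

    def do_GET(self):
        received.append(self.path)
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):
        pass  # keep output quiet

server = http.server.HTTPServer(('127.0.0.1', 0), HookHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# What the kubelet does on pod deletion: an HTTP GET against the hook endpoint.
port = server.server_address[1]
urllib.request.urlopen(f'http://127.0.0.1:{port}/echo?msg=prestop').read()
server.shutdown()
```

The "check prestop hook" STEP in the log is the analogue of inspecting `received` afterward: the test passes only if the hook request actually arrived before the pod was torn down.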
• [SLOW TEST:12.233 seconds] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":303,"completed":298,"skipped":4873,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 12:09:51.404: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on tmpfs Sep 15 12:09:51.519: INFO: Waiting up to 5m0s for pod "pod-6cab8a6c-4301-4160-af5e-995446130901" in namespace 
"emptydir-6514" to be "Succeeded or Failed" Sep 15 12:09:51.530: INFO: Pod "pod-6cab8a6c-4301-4160-af5e-995446130901": Phase="Pending", Reason="", readiness=false. Elapsed: 10.802942ms Sep 15 12:09:53.537: INFO: Pod "pod-6cab8a6c-4301-4160-af5e-995446130901": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018656379s Sep 15 12:09:55.543: INFO: Pod "pod-6cab8a6c-4301-4160-af5e-995446130901": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024094545s STEP: Saw pod success Sep 15 12:09:55.543: INFO: Pod "pod-6cab8a6c-4301-4160-af5e-995446130901" satisfied condition "Succeeded or Failed" Sep 15 12:09:55.546: INFO: Trying to get logs from node kali-worker2 pod pod-6cab8a6c-4301-4160-af5e-995446130901 container test-container: STEP: delete the pod Sep 15 12:09:55.561: INFO: Waiting for pod pod-6cab8a6c-4301-4160-af5e-995446130901 to disappear Sep 15 12:09:55.582: INFO: Pod pod-6cab8a6c-4301-4160-af5e-995446130901 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 12:09:55.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6514" for this suite. 
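The (non-root,0666,tmpfs) case verifies the file mode the test container observes on the emptyDir mount. The same mode check can be sketched locally, with a plain temporary file standing in for the tmpfs-backed volume:

```python
import os
import stat
import tempfile

# Create a file and give it the 0666 mode the test expects on the mount.
fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o666)  # chmod is not masked by umask, unlike open()

st = os.stat(path)
mode = stat.S_IMODE(st.st_mode)     # numeric permission bits
perms = stat.filemode(st.st_mode)   # symbolic form, as ls -l would print it
os.unlink(path)
```

The conformance test performs the equivalent inside the pod and compares the symbolic string (`-rw-rw-rw-`) printed by its test container, which is why the log pulls container logs after "Saw pod success".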
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":299,"skipped":4874,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 12:09:55.590: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-48e7cb87-3fb9-4d31-9f35-3587d9b7706f in namespace container-probe-547 Sep 15 12:09:59.654: INFO: Started pod liveness-48e7cb87-3fb9-4d31-9f35-3587d9b7706f in namespace container-probe-547 STEP: checking the pod's current state and verifying that restartCount is present Sep 15 12:09:59.657: INFO: Initial restart count of pod liveness-48e7cb87-3fb9-4d31-9f35-3587d9b7706f is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 
12:14:00.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-547" for this suite. • [SLOW TEST:244.855 seconds] [k8s.io] Probing container /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":303,"completed":300,"skipped":4903,"failed":0} [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 12:14:00.445: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 15 
12:14:00.735: INFO: Waiting up to 5m0s for pod "downwardapi-volume-548dedfc-3f85-409f-a0c1-be3db25e9869" in namespace "projected-1053" to be "Succeeded or Failed" Sep 15 12:14:00.751: INFO: Pod "downwardapi-volume-548dedfc-3f85-409f-a0c1-be3db25e9869": Phase="Pending", Reason="", readiness=false. Elapsed: 15.412983ms Sep 15 12:14:02.755: INFO: Pod "downwardapi-volume-548dedfc-3f85-409f-a0c1-be3db25e9869": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020292712s Sep 15 12:14:04.760: INFO: Pod "downwardapi-volume-548dedfc-3f85-409f-a0c1-be3db25e9869": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025228474s STEP: Saw pod success Sep 15 12:14:04.760: INFO: Pod "downwardapi-volume-548dedfc-3f85-409f-a0c1-be3db25e9869" satisfied condition "Succeeded or Failed" Sep 15 12:14:04.763: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-548dedfc-3f85-409f-a0c1-be3db25e9869 container client-container: STEP: delete the pod Sep 15 12:14:04.820: INFO: Waiting for pod downwardapi-volume-548dedfc-3f85-409f-a0c1-be3db25e9869 to disappear Sep 15 12:14:04.832: INFO: Pod downwardapi-volume-548dedfc-3f85-409f-a0c1-be3db25e9869 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 12:14:04.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1053" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":301,"skipped":4903,"failed":0} SSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 12:14:04.839: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6929.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-6929.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6929.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6929.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-6929.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-6929.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-6929.svc.cluster.local A)" && test -n "$$check" && echo OK > 
/results/wheezy_tcp@dns-test-service-2.dns-6929.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6929.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6929.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-6929.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6929.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-6929.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-6929.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-6929.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-6929.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-6929.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-6929.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Sep 15 12:14:11.049: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6929.svc.cluster.local from pod dns-6929/dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b: the server could not find the requested resource (get pods dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b) Sep 15 12:14:11.052: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6929.svc.cluster.local from pod dns-6929/dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b: the server could not find the requested resource (get pods dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b) Sep 15 12:14:11.055: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6929.svc.cluster.local from pod dns-6929/dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b: the server could not find the requested resource (get pods dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b) Sep 15 12:14:11.059: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6929.svc.cluster.local from pod dns-6929/dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b: the server could not find the requested resource (get pods dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b) Sep 15 12:14:11.069: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6929.svc.cluster.local from pod dns-6929/dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b: the server could not find the requested resource (get pods dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b) Sep 15 12:14:11.072: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6929.svc.cluster.local from 
pod dns-6929/dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b: the server could not find the requested resource (get pods dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b) Sep 15 12:14:11.076: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6929.svc.cluster.local from pod dns-6929/dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b: the server could not find the requested resource (get pods dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b) Sep 15 12:14:11.079: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6929.svc.cluster.local from pod dns-6929/dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b: the server could not find the requested resource (get pods dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b) Sep 15 12:14:11.090: INFO: Lookups using dns-6929/dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6929.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6929.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6929.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6929.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6929.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6929.svc.cluster.local jessie_udp@dns-test-service-2.dns-6929.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6929.svc.cluster.local] Sep 15 12:14:16.095: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6929.svc.cluster.local from pod dns-6929/dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b: the server could not find the requested resource (get pods dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b) Sep 15 12:14:16.098: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6929.svc.cluster.local from pod dns-6929/dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b: the server could not find the requested resource (get pods dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b) Sep 15 12:14:16.102: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6929.svc.cluster.local from 
pod dns-6929/dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b: the server could not find the requested resource (get pods dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b) Sep 15 12:14:16.106: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6929.svc.cluster.local from pod dns-6929/dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b: the server could not find the requested resource (get pods dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b) Sep 15 12:14:16.116: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6929.svc.cluster.local from pod dns-6929/dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b: the server could not find the requested resource (get pods dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b) Sep 15 12:14:16.120: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6929.svc.cluster.local from pod dns-6929/dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b: the server could not find the requested resource (get pods dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b) Sep 15 12:14:16.123: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6929.svc.cluster.local from pod dns-6929/dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b: the server could not find the requested resource (get pods dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b) Sep 15 12:14:16.126: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6929.svc.cluster.local from pod dns-6929/dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b: the server could not find the requested resource (get pods dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b) Sep 15 12:14:16.133: INFO: Lookups using dns-6929/dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6929.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6929.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6929.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6929.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6929.svc.cluster.local 
jessie_tcp@dns-querier-2.dns-test-service-2.dns-6929.svc.cluster.local jessie_udp@dns-test-service-2.dns-6929.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6929.svc.cluster.local] Sep 15 12:14:21.095: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6929.svc.cluster.local from pod dns-6929/dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b: the server could not find the requested resource (get pods dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b) Sep 15 12:14:21.098: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6929.svc.cluster.local from pod dns-6929/dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b: the server could not find the requested resource (get pods dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b) Sep 15 12:14:21.101: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6929.svc.cluster.local from pod dns-6929/dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b: the server could not find the requested resource (get pods dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b) Sep 15 12:14:21.104: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6929.svc.cluster.local from pod dns-6929/dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b: the server could not find the requested resource (get pods dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b) Sep 15 12:14:21.114: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6929.svc.cluster.local from pod dns-6929/dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b: the server could not find the requested resource (get pods dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b) Sep 15 12:14:21.117: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6929.svc.cluster.local from pod dns-6929/dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b: the server could not find the requested resource (get pods dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b) Sep 15 12:14:21.121: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6929.svc.cluster.local from pod 
dns-6929/dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b: the server could not find the requested resource (get pods dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b) Sep 15 12:14:21.123: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6929.svc.cluster.local from pod dns-6929/dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b: the server could not find the requested resource (get pods dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b) Sep 15 12:14:21.129: INFO: Lookups using dns-6929/dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6929.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6929.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6929.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6929.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6929.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6929.svc.cluster.local jessie_udp@dns-test-service-2.dns-6929.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6929.svc.cluster.local] Sep 15 12:14:26.095: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6929.svc.cluster.local from pod dns-6929/dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b: the server could not find the requested resource (get pods dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b) Sep 15 12:14:26.099: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6929.svc.cluster.local from pod dns-6929/dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b: the server could not find the requested resource (get pods dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b) Sep 15 12:14:26.102: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6929.svc.cluster.local from pod dns-6929/dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b: the server could not find the requested resource (get pods dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b) Sep 15 12:14:26.106: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6929.svc.cluster.local from pod 
dns-6929/dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b: the server could not find the requested resource (get pods dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b) Sep 15 12:14:26.116: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6929.svc.cluster.local from pod dns-6929/dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b: the server could not find the requested resource (get pods dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b) Sep 15 12:14:26.119: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6929.svc.cluster.local from pod dns-6929/dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b: the server could not find the requested resource (get pods dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b) Sep 15 12:14:26.122: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6929.svc.cluster.local from pod dns-6929/dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b: the server could not find the requested resource (get pods dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b) Sep 15 12:14:26.126: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6929.svc.cluster.local from pod dns-6929/dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b: the server could not find the requested resource (get pods dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b) Sep 15 12:14:26.133: INFO: Lookups using dns-6929/dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6929.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6929.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6929.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6929.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6929.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6929.svc.cluster.local jessie_udp@dns-test-service-2.dns-6929.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6929.svc.cluster.local] Sep 15 12:14:31.095: INFO: Unable to read 
wheezy_udp@dns-querier-2.dns-test-service-2.dns-6929.svc.cluster.local from pod dns-6929/dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b: the server could not find the requested resource (get pods dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b) Sep 15 12:14:31.098: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6929.svc.cluster.local from pod dns-6929/dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b: the server could not find the requested resource (get pods dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b) Sep 15 12:14:31.102: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6929.svc.cluster.local from pod dns-6929/dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b: the server could not find the requested resource (get pods dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b) Sep 15 12:14:31.105: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6929.svc.cluster.local from pod dns-6929/dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b: the server could not find the requested resource (get pods dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b) Sep 15 12:14:31.115: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6929.svc.cluster.local from pod dns-6929/dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b: the server could not find the requested resource (get pods dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b) Sep 15 12:14:31.119: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6929.svc.cluster.local from pod dns-6929/dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b: the server could not find the requested resource (get pods dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b) Sep 15 12:14:31.122: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6929.svc.cluster.local from pod dns-6929/dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b: the server could not find the requested resource (get pods dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b) Sep 15 12:14:31.125: INFO: Unable to read 
jessie_tcp@dns-test-service-2.dns-6929.svc.cluster.local from pod dns-6929/dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b: the server could not find the requested resource (get pods dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b) Sep 15 12:14:31.132: INFO: Lookups using dns-6929/dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6929.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6929.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6929.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6929.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6929.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6929.svc.cluster.local jessie_udp@dns-test-service-2.dns-6929.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6929.svc.cluster.local] Sep 15 12:14:36.103: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6929.svc.cluster.local from pod dns-6929/dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b: the server could not find the requested resource (get pods dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b) Sep 15 12:14:36.105: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6929.svc.cluster.local from pod dns-6929/dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b: the server could not find the requested resource (get pods dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b) Sep 15 12:14:36.107: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6929.svc.cluster.local from pod dns-6929/dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b: the server could not find the requested resource (get pods dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b) Sep 15 12:14:36.109: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6929.svc.cluster.local from pod dns-6929/dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b: the server could not find the requested resource (get pods dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b) Sep 15 12:14:36.116: INFO: Unable to read 
jessie_udp@dns-querier-2.dns-test-service-2.dns-6929.svc.cluster.local from pod dns-6929/dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b: the server could not find the requested resource (get pods dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b) Sep 15 12:14:36.118: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6929.svc.cluster.local from pod dns-6929/dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b: the server could not find the requested resource (get pods dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b) Sep 15 12:14:36.121: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6929.svc.cluster.local from pod dns-6929/dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b: the server could not find the requested resource (get pods dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b) Sep 15 12:14:36.124: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6929.svc.cluster.local from pod dns-6929/dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b: the server could not find the requested resource (get pods dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b) Sep 15 12:14:36.129: INFO: Lookups using dns-6929/dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6929.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6929.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6929.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6929.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6929.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6929.svc.cluster.local jessie_udp@dns-test-service-2.dns-6929.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6929.svc.cluster.local] Sep 15 12:14:41.132: INFO: DNS probes using dns-6929/dns-test-0460c15d-2be4-4443-9a5c-9c5d29a1b57b succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 12:14:41.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6929" for this suite. • [SLOW TEST:37.025 seconds] [sig-network] DNS /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":303,"completed":302,"skipped":4912,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 15 12:14:41.865: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 15 12:14:46.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-7123" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":303,"completed":303,"skipped":4921,"failed":0} SSSSSSSS Sep 15 12:14:46.220: INFO: Running AfterSuite actions on all nodes Sep 15 12:14:46.220: INFO: Running AfterSuite actions on node 1 Sep 15 12:14:46.220: INFO: Skipping dumping logs from cluster JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml {"msg":"Test Suite completed","total":303,"completed":303,"skipped":4929,"failed":0} Ran 303 of 5232 Specs in 6177.977 seconds SUCCESS! -- 303 Passed | 0 Failed | 0 Pending | 4929 Skipped PASS